Breaking

Saturday, March 31, 2018

3/31/2018 11:06:00 PM

Macs finally support external graphics cards with macOS High Sierra 10.13.4

Sadly, Nvidia support is nowhere to be found


External graphics card support, a feature promised by Apple since the launch of macOS High Sierra back in September 2017, has finally arrived with version 10.13.4, which is available now.

Apple has detailed how the feature works through a support page on its website, noting that this function only works with Macs that support Thunderbolt 3 connectivity. So, that means MacBook Pro models released since 2016, iMac models since 2017 and the brand new iMac Pro.

Of course, you’ll also need this update installed, which is available through the Mac App Store.

Having an external graphics card, or eGPU, connected to your Mac allows for far more functionality than just improved graphics grunt, however. Here are the highlights of what the feature allows for, straight from the horse’s mouth:

  • Accelerate applications that use Metal, OpenGL, and OpenCL
  • Connect additional external monitors and displays
  • Use virtual reality headsets plugged into the eGPU
  • Charge your MacBook Pro while using the eGPU
  • Use an eGPU with your MacBook Pro while its built-in display is closed


While that’s more than perhaps many were expecting from this change, there is one glaring shortcoming of the feature.

Nvidia is a no-show

Sadly, the list of supported graphics cards is rather small, and the list of graphics card enclosures that support each model is smaller still. Without getting buried in the minutiae, which you can find on Apple’s support page, here are the supported graphics cards:

  1. AMD Radeon RX 570
  2. AMD Radeon RX 580
  3. AMD Radeon Pro WX 7100
  4. AMD Radeon RX Vega 56
  5. AMD Radeon RX Vega 64
  6. AMD Vega Frontier Edition Air
  7. AMD Radeon Pro WX 9100


Notice something missing from this list? That’s right, Nvidia’s graphics cards are nowhere to be found. Apple makes no mention of either Nvidia or its products on the support page detailing eGPU support.

So, regardless of the wattage of your eGPU enclosure, we certainly wouldn’t recommend trying out Nvidia graphics cards with your Mac computer. (Also, don’t try using eGPUs while running Windows in Boot Camp – Apple notes that this is not supported.)

It’s unclear why Apple has omitted Nvidia support entirely from its eGPU feature, but considering that none of its current iMac or MacBook Pro models ship with Nvidia graphics, it makes a little more sense. This is a massive boon to users wanting to game and get creative on a Mac, but here’s hoping that the list of supported hardware is widened in the future.



3/31/2018 10:33:00 PM

Sensors and machine learning: How applications can see, hear, feel, smell, and taste

All five senses take the form of some kind of sensor and some kind of mathematical algorithm, usually a supervised machine learning algorithm and a model.


Through the power of deep and machine learning, faster CPUs, and new types of sensors, computers can now see, hear, feel, smell, taste, and speak. All these senses take the form of some kind of sensor (like a camera) and some kind of mathematical algorithm, usually a supervised machine learning algorithm and a model.

Here is what is available.

See: image and facial recognition

Recent research into image and facial recognition lets computers not only detect the presence of an object but also detect multiple instances of similar objects. Facebook and Google have really been leading the way here, with multiple open source releases. Facebook has stated that it has a goal of detecting objects in video.

This area has come a long way in recent years: objects in an image can now be segmented from other objects. However, just because you found something and can segment it from something else doesn’t mean you know what it is; that requires training a model that recognizes those things.

These are powerful tools, but they are extremely data-hungry. So Facebook and Google can release them, gain the benefits of research and community-developed derivatives, and not worry too much about competition in this area. Simply put, few organizations have millions or billions of images to put through them, let alone the computing power to spend on the training.

In essence, classifying objects with machine or deep learning is first a matter of “seeing” a lot of instances of a sheep or a cat, including various derivatives (big ones, little ones, furry ones, less-furry ones, skinny ones, fat ones, tailless ones). Then it is a matter of training a model that recognizes all of the variants.
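
As a rough illustration of that workflow, here is a minimal sketch of training a tiny image classifier with Keras; the folder layout, image size, and network shape are all hypothetical, and a production model would need far more data and depth.

```python
# Minimal sketch: train a tiny CNN to tell "sheep" from "cat".
# Assumes a hypothetical folder layout: data/train/sheep/*.jpg, data/train/cat/*.jpg
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(128, 128), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),    # normalize pixel values to [0, 1]
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2),                 # two classes: sheep, cat
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])

# "Seeing" lots of labeled examples is the training step described above.
model.fit(train_ds, epochs=5)
```

The hard part is not the code; it is assembling enough labeled variants (big, little, furry, tailless) for the model to generalize.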

While Facebook and Google are clearly putting the most weight into this field, there are other tools like the venerable OpenCV library, a grab-bag of functionality, and OpenFace, which is focused on just facial recognition.
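
To give a flavor of the OpenCV side, here is a minimal sketch that finds faces in a photo using the Haar cascade bundled with the opencv-python package (the image path is hypothetical). Note that this only detects that faces are present; recognizing whose face it is requires a trained model, which is where something like OpenFace comes in.

```python
# Minimal sketch: detect faces in a still image with OpenCV's bundled Haar cascade.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("group_photo.jpg")             # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)    # cascades operate on grayscale

# One (x, y, w, h) rectangle per detected face.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Found {len(faces)} face(s)")

for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces_out.jpg", img)
```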

There is even JeVois (French for “I see”), a smart camera for Arduino devices that comes with trained models based on open source libraries. It’s trained to recognize about 1,000 different objects, and you can obviously tweak things with your own models. So your plan to create an autonomous quadcopter that flies itself around is indeed possible!

Hear: Speech recognition and sound classification

Much of computer “hearing” is focused on speech recognition. However, sound classification is possible. Obviously, this exists because Shazam is a thing, but the models for general sound classification aren’t quite as available or broad as you’d hope. Still, PyAudioAnalysis lets you take a .wav file and classify sounds.

Did you capture birdsong or road noise? Like image recognition, this means training a classification model. This area seems less invested in, maybe because Facebook video is largely watched on mute, and while there is a video.google.com and an images.google.com, there is no sounds.google.com.
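
As a hedged sketch of what training such a classifier involves (this is a generic approach using librosa and scikit-learn rather than pyAudioAnalysis's own API, and the file names and labels are invented):

```python
# Minimal sketch: classify short .wav clips (birdsong vs. road noise)
# using averaged MFCC features and a support vector machine.
import numpy as np
import librosa
from sklearn.svm import SVC

def featurize(path):
    """Average MFCCs over time to get one fixed-length vector per clip."""
    y, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

# Hypothetical labeled training clips.
train_files = ["bird_01.wav", "bird_02.wav", "road_01.wav", "road_02.wav"]
train_labels = ["birdsong", "birdsong", "road_noise", "road_noise"]

X = np.array([featurize(f) for f in train_files])
clf = SVC(kernel="rbf").fit(X, train_labels)

# Classify a new recording.
print(clf.predict([featurize("mystery_clip.wav")]))
```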

For speech recognition, you can find open source implementations that use the more traditional hidden Markov models, like CMUSphinx, as well as ones like Kaldi that use a neural network. There are other implementations, but the key breakdown is between online and offline decoding: “online” means it can decode straight off a mic; “offline” means you have to wait until you have a .wav file.

The major vendors—IBM, Google, Apple, and Microsoft—all have their respective implementations. Google’s is quite good. Heck, you can even do some speech recognition in the browser with JavaScript.
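
To make the online/offline distinction concrete, here is a minimal sketch using the third-party SpeechRecognition package for Python, which wraps both a local CMUSphinx decoder and Google's web API; treat the exact packages and audio files as assumptions.

```python
# Minimal sketch: "offline" decoding of a finished .wav file versus
# "online"-style decoding straight from the microphone.
import speech_recognition as sr

r = sr.Recognizer()

# Offline: wait until you have a complete .wav file, then decode it locally.
with sr.AudioFile("meeting.wav") as source:      # hypothetical recording
    audio = r.record(source)
print(r.recognize_sphinx(audio))                 # CMUSphinx, runs on the machine

# Online: capture a phrase from the mic and decode it as it arrives.
with sr.Microphone() as source:
    r.adjust_for_ambient_noise(source)
    audio = r.listen(source)                     # blocks until a phrase is heard
print(r.recognize_google(audio))                 # sends audio to Google's web API
```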

Feel: one sense with little public technology

When it comes to touch, not much seems to have happened in terms of detecting how something "feels" using touch sensors. Mainly these are used in control applications (like the old Nintendo Power Glove everyone wanted but never got and apparently didn’t work all that well).

There are "did you touch it" sensors for Arduino, and libraries and sensors for detecting gestures. Probably the most promising "did you touch it" innovation is capacitive woven fabric. However, when it comes to the more practical machine task of "touching the surface to see if there is a defect," most applications are optical or ultrasonic.

Smell: the electronic nose

Yes, computers can smell. Yes, there are practical uses for this. And the “electronic nose” has been around for a while.

For the cheap version, in essence, a sensor is tied to an Arduino device and "inhales" gases. Based on the volume of gases present, it can "detect" things like which hops are used in a beer or whether the air is becoming toxic. These technologies have been used for everything from bomb-sniffing to quality control.
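
A hedged sketch of the cheap version: an MQ-series gas sensor wired to an Arduino that prints raw readings over USB serial, with a Python script on the other end flagging bad air. The port name, baud rate, and threshold are assumptions, not values from any particular product.

```python
# Minimal sketch: read gas-sensor values streamed from an Arduino over USB serial
# and flag readings above a crude "air is getting bad" threshold.
import serial  # pyserial

PORT = "/dev/ttyACM0"   # hypothetical port (something like COM3 on Windows)
BAUD = 9600
THRESHOLD = 400         # hypothetical raw ADC value

with serial.Serial(PORT, BAUD, timeout=2) as ser:
    while True:
        line = ser.readline().decode(errors="ignore").strip()
        if not line:
            continue
        try:
            reading = int(line)   # the Arduino sketch prints one raw value per line
        except ValueError:
            continue
        if reading > THRESHOLD:
            print(f"Warning: gas level {reading} is above the threshold")
        else:
            print(f"OK: gas level {reading}")
```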

Taste: another sense with little public technology

What is taste to a computer? This is subjective, and remember that a lot of human taste is actually smell. The sensors here are chemical, microbial, pH, and titration sensors. The practical applications are wide, such as detecting if you’re sick, if you have adequate glucose levels, or if something is poisoned.

There is again a big overlap with smell, just as in human anatomy. There is the least amount of public source code here, and training a model probably means having access to a chem lab or data from one.

No, you can’t build Commander Data yet

With the five senses covered, can we build Star Trek: The Next Generation’s Commander Data yet or at least his stupid cousin B4 (since we don’t have AGI yet)? Probably not yet. Even if you have the sensors and libraries, we’re still a bit away from having fully trained models everywhere. Also, this stuff is data-hungry and much of it is a bit too slow for practical real-time use.

As a result, we are still working towards practical facial recognition in video. Touch is mainly "did you touch it?" or other single-purpose sensors. Smell is much the same, and taste is even more limited.

Still, like much of machine and deep learning, as long as you have a single-purpose application (like "is the coffee rotten?"), AI and sensors have come a long way. Maybe computers aren’t up to humans’ level in the five senses, but they do have these sensors, and there are widely available implementations, both free and proprietary, for developers to use.



3/31/2018 09:26:00 PM

Nvidia redefines autonomous vehicle testing with VR simulation system

The Drive Constellation simulation environment allows for sensor data to be processed as if it were coming in from sensors on a real car cruising the streets.


Nvidia on Tuesday announced Drive Constellation, a cloud-based system for testing autonomous vehicles using photorealistic simulation via virtual reality, aiming to speed up the delivery of autonomous cars in a safer and more scalable way.

According to Nvidia senior director of Automotive Danny Shapiro, every two minutes, four to five people die in vehicle-related accidents -- totaling 3,000 people per day globally.

"This is a big problem, so we're really focused on bringing the hardware and software to solve the challenge," he said.

Drive Constellation is a computing platform based on two different servers. The first runs the Nvidia Drive Sim software to simulate an autonomous vehicle's sensors, such as cameras, lidar, and radar; the second boasts Nvidia Drive Pegasus, an artificial intelligence car computer that runs the autonomous vehicle software stack and processes the simulated data as if it were being fed in from sensors on a real car.

The Drive Sim software also generates the photoreal data streams to create a range of different testing environments, including natural occurrences such as storms, snow, high glare, low light, and different types of road surfaces and terrain.

"Essentially, we're running the complete hardware/software solution that would normally be in the vehicle, but we've moved it to the datacentre," Shapiro explained.

"The output from the simulator goes into the Drive Pegasus, performs its deep learning operations, it senses the environment ... and then instead of actuating the steering wheel on a real vehicle with the accelerator/brake, it sends those commands back to the simulator."

Driving commands from Drive Pegasus are fed back to the simulator 30 times per second.

This allows for the simulation of dangerous situations to drive the vehicle for billions of kilometers and test the autonomous car's ability to react, without putting an individual in harm's way.
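
Conceptually, the closed loop looks something like the sketch below. This is not Nvidia's API, just illustrative Python in which every object and method name is hypothetical; it simply shows sensor frames flowing one way and driving commands flowing back at roughly 30 Hz.

```python
# Conceptual sketch of a hardware-in-the-loop simulation cycle:
# simulator renders sensor data -> driving stack decides -> commands go back.
import time

TICK_HZ = 30                 # commands are fed back 30 times per second
TICK = 1.0 / TICK_HZ

def run_scenario(simulator, driving_stack, seconds=60):
    """Run one simulated scenario, e.g. a snowstorm or a jaywalking pedestrian."""
    state = simulator.reset()
    for _ in range(int(seconds * TICK_HZ)):
        start = time.monotonic()

        # Server 1: render photoreal camera, lidar, and radar frames.
        sensor_frames = simulator.render_sensors(state)

        # Server 2: the AV stack treats the frames as if they came from a real car
        # and returns steering, throttle, and brake commands.
        commands = driving_stack.step(sensor_frames)

        # Feed the commands back into the simulated world.
        state = simulator.apply(commands)

        # Pace the loop at roughly 30 Hz.
        time.sleep(max(0.0, TICK - (time.monotonic() - start)))
```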

"Self-driving is hard, we recognize that and I think it becomes more and more apparent every day as companies are out there really trying to solve this challenge," Shapiro told press at a briefing event.

"We're working very hard on this ... we've got teams developing both the hardware and the software. It requires a massive amount of computing, so Nvidia is really the ideal company to bring the hardware and the deep learning software to solve this challenge."

Nvidia CEO and founder Jensen Huang demonstrated the platform during his keynote on Tuesday, and announced the availability of Drive Constellation by the third quarter of this year.

"It's incredible what advances are going to be able to be made by our customers, being able to really accelerate the development and refine it," Shapiro added.

"RAND Corporation has put out a study that said we need to have billions of miles of driving to test these cars are safe for humans, but the reality is to do that would take more than our lifetime.

"Instead, by doing it in simulation, being able to have GPUs generating sensor data and feeding that back and testing it, our customers will be able to quickly refine their algorithms and accelerate their development."

Over the last year, Nvidia's automotive business has grown, with Shapiro highlighting that the company's ecosystem comprises virtually anyone developing something to do with cars, trucks, and delivery vehicles.

"The size of that ecosystem is over 370 different companies -- these are the car makers, the tier one suppliers ... but also we've got over 200 startups that are developing on our platform," he said. "We've seen such incredible growth in this ecosystem."

Volkswagen, Toyota, and Volvo are just a few manufacturing giants that have moved onto Nvidia's platform in the past year.

NVIDIA SUSPENDS ON-ROAD SELF-DRIVING TESTING

The importance of safety with autonomous vehicles was highlighted last week; Uber suspended its driverless vehicle testing in Tempe, Pittsburgh, San Francisco, and Toronto, when a female pedestrian died after being struck by an Uber car operating in an autonomous mode in Tempe, Arizona.

Tempe police reported that the car was in autonomous mode with a human safety driver at the wheel, as required by law, when it struck the pedestrian, who had walked into the street with her bike.

A Nvidia spokesperson on Tuesday told ZDNet it too has paused on-road trials in the wake of the tragedy.

"The accident was tragic. It's a reminder of how difficult SDC technology is and that it needs to be approached with extreme caution and the best safety technologies. This tragedy is exactly why we've committed ourselves to perfecting this life-saving technology," the spokesperson said.

"Ultimately AVs will be far safer than human drivers, so this important work needs to continue.

"We are temporarily suspending the testing of our self-driving cars on public roads to learn from the Uber incident. Our global fleet of manually driven data collection vehicles continues to operate."

Huang said Nvidia is dedicating itself to this problem, to make self-driving vehicles the safest option in the future.



3/31/2018 09:22:00 PM

How Spotify for Apple Watch is more important than just music on your wrist

It's all down to Apple's StreamKit.


Earlier this week we heard that Spotify may be coming to the Apple Watch 3 in the watchOS 5 update, but the introduction of the app may signal something much more exciting for the company's range of smartwatches.

The rumor comes from an unverified tipster speaking to MacRumors, who claims Apple will use WWDC 2018 (coming up in June this year) to show off the app for the first time.

While it's exciting to see a third-party music streaming service debut on the Apple Watch, it marks a far more important introduction of tech working on the Apple Watch.

The rumor suggests Apple will introduce its next StreamKit framework within watchOS 5, which will allow third-party developers to make use of the cellular features on the Apple Watch.

Big changes for Apple Watch

Only one version of the Apple Watch 3 can currently connect to a cellular network, and the features that offers are limited to apps provided by Apple.

This new software change will allow developers to push notifications directly to your wrist. That means you'll be able to get messages and more on your watch without having them go through your phone first, and you'll be able to leave your phone at home more often.

Take Spotify, for example, where you'll be able to select the track you want to listen to from the watch and be able to stream it through your Bluetooth headphones without the need for your phone.

That's been possible for a few months with Apple Music, but this marks the first time third-party developers have been able to make apps that can do that directly for your watch.

Apple Music streaming to the Apple Watch 3

Arguably this change may make it even more possible for you to leave the house and not have to take your phone with you at all. You'll be able to receive phone calls and texts like normal, plus also have your favorite apps send you notifications.

It could change the way you use the Apple Watch on a daily basis.

It'll especially be useful when exercising if fitness apps embrace the new StreamKit tech to be able to work while on the move without the phone on you.

Apple may change the name of StreamKit by June ready for the launch, but the leaks so far suggest the features will stay the same and we hope we'll see developers making use of it straight after the conference.

That's the main issue here though. More and more developers are dropping support for Apple Watch apps; we've seen Twitter, Amazon, eBay, Google Maps and Slack all drop support over the last few years.

While those apps may not embrace the StreamKit features right away, we're sure to see a variety of alternative apps embrace it, and the new features available on the Apple Watch may even encourage some developers who dropped support to return.


It may also be the first time you can download apps specifically for your Apple Watch. Currently you download them packaged with a phone app, but if you can use your watch without your phone, it may be that you can download certain apps and games just for your watch and not have them on your iPhone.

We're expecting to see an Apple Watch 4 launch toward the end of 2018, so we may see some new and improved cellular functionality in that device too.

It'll likely drop the price of the Watch 3 so more and more people will be able to embrace these true wireless watchOS features too.




3/31/2018 07:19:00 PM

Veritas acquires cloud data management firm fluid Operations AG

The deal has been made in the hopes of improving Veritas' smart data management portfolio.


Veritas has announced the acquisition of fluid Operations AG to assist customers in achieving a return on investment when it comes to data.

On Monday, Veritas said that the purchase of privately-held fluidOps will "accelerate Veritas' mission to enable customers to harness the power of their data -- regardless of where that data is housed -- while driving insights that can lead to competitive advantage."

Financial terms of the deal were not disclosed.

Founded in 2008, Walldorf, Germany-based fluidOps specializes in data management tools for orchestrating, integrating, and optimizing both structured and unstructured data sources held in silos across on-premises, public, private, or hybrid cloud environments.

The company has secured $4 million through Series B funding since launch.

Veritas says that many enterprises struggle with managing, analyzing, and turning data into actionable intelligence given the range of storage options in use.

While cloud technologies allow companies to securely store data, outsource IT requirements, and reduce operational workloads, bringing structured and unstructured data together for ROI can be a challenge.

Mike Palmer, CPO of Veritas, said the deal will provide "new levels of orchestration and semantic integration that is critical for today's global enterprises."

In addition, Veritas hopes the deal will further the company's ambitions in the areas of artificial intelligence (AI) and machine learning (ML).

FluidOps' portfolio will be integrated into both "current and future" data management solutions, beginning with NAS solution Veritas Access 7.4, which will be available later this year.

"We are delighted to join the Veritas team," added Dr. Andreas Eberhart, co-founder of fluidOps. "For years, fluidOps has focused on providing customers smart data management solutions for heterogeneous data pools, empowering organizations to bridge data silos, benefit from data transparency and accelerate innovations in a hybrid cloud world."


3/31/2018 04:15:00 PM

Redstor: Building a data management suite in the cloud

Q&A: Redstor has moved from backup into disaster recovery and archiving in the cloud.


Redstor is a data management software as a service business, offering backup, disaster recovery, and archiving in the cloud.

Its customers include the law firms Travers Smith, Hill Dickinson, and Nelson, the service management company Vivantion, as well as Capita and Maxell. According to the company, it has some 40,000 customers to date, ranging from sole traders to large international companies.

ZDNet talked to CEO Paul Evans and technical services director Thomas Campbell to find out more.

Redstor CEO Paul Evans.

Evans: Redstor was set up in 1998. In those days, we sold big solutions to enterprise customers, and our work went from putting in the first storage area networks (SANs) to developing high-availability systems, disk systems, backup and recovery systems, and so on.

Then we moved the business into managed services, which meant handling the backup and storage administration for our customers. But we had always had this vision that you would one day be able to manage your data like you manage your electricity, what we call 'data on tap'.

We wanted to build our own big storage platform and backup platform, and that was when we came across a company called Attix 5. It was 2004. They had this great piece of software which was designed for the cloud before people were calling it the cloud. They were backing-up to disk and it was multi-tenanted.

We then evolved the business into this big cloud backup platform and that grew really nicely. As the business evolved further, we were working with partners and, from there, we came across the education sector, where we found a supplier looking to offer an online backup service to schools.

That was version two of Redstor.

Now we had the intellectual property (IP) and we had realized pretty damn quickly that the IP was really good and, while the focus had been on backup, we now realized that there was a lot more to it than just backup.

Backup sits right in the middle of all the data flows. With backup, you can put an agent in every machine that you are looking to back up, and that makes it a single, central repository of data.

We had DR capabilities as well -- the ability to spin up a server in the event of a disaster. We had two-in-one protection.

Now, we want to talk about archiving. We are moving away from just backup to building a complete data management story.

We feel that a lot of enterprise customers are facing big challenges with their data. They've got lots of people working on it. They've got lots of tools. But we felt that there wasn't a complete data management tool which did it for everybody, was easy to use, affordable and easy to understand.

That's what we're building -- a complete data management suite for everyone. When we bought Attix 5, the first thing we did was completely rebuild the software and we called that Redstor Pro ESE.

The software's built in a modular way. Rather than having fixed releases, we went with regular Agile releases. We made all the software modular with no technical debt whatsoever. It's all fresh code.

So where did the idea for the software come from?

Evans: I would like to tell you that it was all a cunning plan, but it actually came about by accident.

Someone came up and knocked on our door in Reading and said: "We're Attix 5, we've got this great software, it backs up to disk, it's multi-tenanted and it's priced in such a way that it's affordable."

Since we had the right price, we were able to create an online backup service that was priced in such a way that you could economically deliver it as a service. The other backup services that existed at that time were too expensive to provide an affordable service.

That's how we came across Attix 5 and we have always had this close relationship with them. Then, we wanted more control over the direction of travel the software was taking, so in 2015 we bought them.

One reason was that at that time our customers were telling us that they wanted faster releases so that they could get more functionality, more quickly, and so we moved onto a monthly release cycle.

Because we had our cloud platform, we got massive amounts of feedback on how to improve the product. Our customers really liked that, but they also liked that we were in control of the whole stack, from software development through to the platform, through to offering really good customer support.

Because the customers are in the whole stack, their feedback gets through quickly to us and equally quickly back to them.

We have gone from being a little company based down in Reading to becoming an international software company focused on data management. We have customers in South Africa, North America, the Middle East, Australasia, and we're growing.

What's your unique point of difference?

This all-in-one approach is one of the key things that we have been doing. We had backup and DR, which was two-in-one, and now we are adding more functionality with archive, which makes it three-in-one.

We are doing things like offering a companion to archiving which can work out what data can be archived. That's the start of our data insight journey and that makes it four-in-one.

We are going to be bringing out a search and discovery app. Now, the great thing about backup is that you have always pulled your data back to a storage platform, a single repository. What better place to start your global search and discovery than a single repository?

This is a really interesting area for us -- the ability of our customers to search all of their data in one place and/or to identify where that data came from. Because if, theoretically, you were to back up all of the data, then all of those individual data stores would end up in that single repository.

What sort of interface are you going to have in this repository? Is it going to be highly technical or simple to use?

Campbell: We are trying to make it as easy to use as possible. Typically, it will take about 30 minutes to show someone how to do a restore. And then as far as all the machines are concerned and the connections between them, you can manage it all through a single interface.

It's a long way from the experience where you needed a two-week training course. Instead of leaving people to try to learn and manage all the intricacies of it, we have tried to make it really simple.

But are you going to leave the choices up to the customer because some will want everything in one place but some will want it diversified?

You're right, but for our standard offering we will always keep two copies of the data on our own platform. If they are a UK customer, they can send the data to our datacentre in Slough and it gets replicated to our datacentre in Reading.

We have lots of options around keeping the data locally, including letting customers keep a copy of the data locally themselves, and options around being able to do restores more quickly.

We've also done loads of work to make the experience of moving the data to the cloud platform as fast and efficient as possible -- one of the major things we've done is to use some functionality within the data.

What we do is make it appear that the data is recovered by using sparse files. We had a customer who had a RAID set failure and lost terabytes of data. Now what they can do is drop that entire dataset to disk. It takes a couple of minutes to create the sparse files on disk and immediately, from that point, you can start accessing those files on disk.

Instead of waiting, as you used to have to, for terabytes of data to come back over the network, you can now, within a couple of minutes, get things up and running again. And that's all happening over the internet.
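
A hedged sketch of the sparse-file trick being described, with an invented path and size: on most filesystems, truncating a brand-new file out to its full length allocates almost no blocks, so a terabyte-scale placeholder appears in seconds and the real data can be streamed in behind it.

```python
# Minimal sketch: create a sparse placeholder for a large restore target.
# The file reports its full logical size immediately but consumes almost no
# disk blocks until real data is written into it.
import os

path = "/restore/bigdataset.img"   # hypothetical restore target
size = 2 * 1024**4                 # 2 TB logical size

with open(path, "wb") as f:
    f.truncate(size)               # extend the file without writing any data

print("logical size:", os.path.getsize(path))                 # reports the full 2 TB
print("blocks actually allocated:", os.stat(path).st_blocks)  # close to zero
```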


3/31/2018 01:24:00 AM

Mobile Review: Huawei P20 Pro review

Three Leica cameras on one phone


FINAL EARLY VERDICT

The Huawei P20 Pro looks to be a significant upgrade on what came before it, with genuinely useful camera innovations, a top-end design and a significant upgrade to the internals thrown in for good measure.

FOR

High-end design
Powerful camera

AGAINST

No headphone jack
Thick bottom bezel

The year of the notch continues, with the Huawei P20 Pro the latest flagship device to sport one; although to distract from its adopting the design now synonymous with Apple's iPhone X, Huawei is focusing on the P20 Pro’s rear camera prowess.

Announced alongside the Huawei P20, the P20 Pro comes with a much higher spec, including three rear cameras that are set to be a big part of the marketing for the new handset.


As well as those three cameras, everything else here looks high-end too. We’ve had a little time to try out the handset, and we’ve put together our first thoughts in this Huawei P20 Pro hands-on review.



Huawei P20 Pro release date and price

The Huawei P20 Pro will be on sale in the UK from April 6, but the company has no plans to bring the phone to the US. We don't currently know if it'll be coming to Australia either.

It's expensive at £799 (about $1,110, AU$1,450), with deals for the phone starting at around £40 a month with 1GB of data to use. We've seen deals in the UK from the main networks like EE, Three, Vodafone, and O2, plus you can also buy it directly from Carphone Warehouse.

Design and display


 The Huawei P20 Pro is undeniably one of the best-looking and best-built phones the company has ever made.

Unlike the Huawei P10 Plus, the P20 Pro features a glass back that sits comfortably in the hand. It has rounded edges on the rear so it nicely sits in your palm, and it feels like the optimum size for a smartphone.

It’s a touch bigger than the Huawei P20, but we found it easier to hold because of the curved rear and a smartly thought-out design. There are metal edges to the device, but these are also rounded, and don’t poke into your hand.

The power button sits on the right-hand edge and the back of the phone is plain, apart from the branding emblazoned down the edge and the three cameras at the top of the handset.

Unlike the Huawei P20, this phone is waterproof, with an IP67 rating. That means it’ll survive the odd accidental dunk in the sink, but Huawei has also given this as an excuse for not including a 3.5mm headphone jack.

If you want to use wired headsets with the phone, you’ll have to use a dongle that’ll be included in the box.

Color options are black and midnight blue for the standard design, while there’s a 'gradient' finish on both the twilight and pink gold options. That's Huawei's name for the finish, which shows a spectrum of colors depending on what angle you're looking at the device from, and the lighting.

We particularly liked the twilight version, as it looks different to anything we’ve seen on a phone before, and the gradient effect isn't as impressive on the pink gold version.

The P20 Pro has a huge 6.1-inch 18:9 aspect ratio OLED display, but thanks to its almost bezel-less design it doesn’t feel like a phone with that large a screen. The display is Full HD+, so it doesn’t look as stunning as the Samsung Galaxy S9 display, but it’s still a looker and grabs your eye immediately when it’s turned on.

At the top of the screen is the notch, which houses the front-facing camera and speaker. If you want you can hide the notch by replacing the screen on either side of it with an on-screen black bezel on which the time and your notifications are displayed; you'll lose a little in terms of overall screen size, but you may prefer the cleaner look.

Performance and specs


Inside the Huawei P20 Pro, there’s a Kirin 970 chipset. We saw this perform well in last year's Huawei Mate 10 Pro, so we hope its successor will be able to keep pace with the other flagship phones announced this year.

That chipset includes a neural processing unit, which Huawei is putting a big focus on for both the P20 and P20 Pro, as it's at the heart of the artificial intelligence features inside the new phones.

A lot of the AI improvements here are within the camera, allowing it, for example, to automatically detect the type of scene you’re shooting.

We noted that the phone itself was snappy in our limited tests, but we can’t currently comment on how fast it’ll work for day to day shooting.

There’s only one version of the P20 Pro around the world, and it comes with a huge 6GB of RAM working away behind the scenes and 128GB of storage, so you should have plenty of space for all your media and apps.

Android 8.1 Oreo software is on board here, but it looks different to what you’ll see on other phones as it comes in the form of Huawei’s own Emotion UI 8.1. 

That’s a very specific overlay that comes with a few extra features, including apps like Huawei Share (which allows you to transfer data between your phone and PC easily), but also has a very specific look that not everyone loves.

There’s a 4,000mAh battery inside the P20 Pro, which should perform well considering the phone has a well-optimized chipset, so we’ll be sure to push it to its limits when it’s time for our full review.

Huawei's Super Charge fast-charging technology is packed in here too, so you should be able to charge your phone up speedily as with previous Huawei handsets.

Camera

Huawei is putting a lot of its eggs into the camera-shaped basket with the P20 Pro. The Leica partnership is continuing, so the setup here is built in collaboration with that company, and it's the first time we’ve seen three cameras on the rear of a phone.

Why would you need three cameras you may well ask? The thinking is similar to that behind the P10 and other phones before it, but now Huawei has added a telephoto lens as well.

Huawei has built numerous cameras with both a color and black and white sensor working in tandem to get photos with improved depth and definition, and now that telephoto lens will also let you zoom with no loss of image quality.

The RGB (that’s color) lens is a whopping 40MP this time around, and that works alongside a 20MP monochrome sensor. For your normal, average automatic mode picture it’ll combine the images from the two lenses.

Above the RGB lens, which sits in the middle of the array, you’ll find an 8MP telephoto sensor. This allows for up to 5x lossless zoom, and while we’ve seen similar lenses on previous handsets from different companies, in our limited testing the P20 Pro's take seemed impressive.

We played around with the zoom feature, which is easy to access within the camera app, and we were unable to see a quality difference in zoomed shots compared to wide-angle ones on the phone screen.

We’ll dig into the other camera improvements during our full review, but if you want the top-end camera from Huawei you’ll have to opt for the P20 Pro rather than the P20.

There’s a huge 24MP selfie shooter on the front of the phone, but we’ve not had much opportunity to play around with it yet to discover how it compares to other front-facing cameras.

Early verdict

The Huawei P20 Pro is putting such a big focus on the camera improvements that the other elements of the phone seem comparatively limited in terms of upgrades. That said, everything looks up to scratch, and this is the best-looking phone from Huawei yet.

The camera is the star here though, and if you want one of the best shooters on an Android phone the P20 Pro may well be the phone to go for – we’ll have to do some further testing first to find out what it can really achieve.


SOURCE BY TechRadar


3/31/2018 12:45:00 AM

iOS 12 release date, news and rumors

iOS 12 is expected to debut on June 4


iOS 12 is the next big update for your iPhone and iPad now that everyone has iOS 11.3, likely the last iOS 11 update with a lot of front-facing features.

We're expecting iOS 12 to debut June 4 at WWDC 2018. That shouldn't surprise you by now – Apple launches its big iOS update at the same time every year. 

Its new software features, however, are always filled with some unexpected and exciting news. We fully expect groundbreaking ideas, but also a healthy dose of iOS 11 fixes given our many, many ongoing iOS 11 problems.

With the launch of iPhone X last year, the theoretical iPhone X2 release date happening later in 2018 and, in between, the Pencil-compatible new iPad 2018 launching into more hands, Apple seems poised to make more big changes.

Here's our list of what we expect from iOS 12, given leaks and rumors about the next big mobile operating system update for the iPhone and iPad.

Cut to the chase

  • What is iOS 12? Apple's next big iPhone and iPad software update 
  • When is the iOS 12 release date? Announced on June 4, with a subsequent beta and a September launch
  • How much will iOS 12 cost? Nothing. iOS 12 will remain free.


iOS 12 release date

The iOS 12 release date is June 4, 2018. At least, that's when we first expect to see the changes Apple is making to its mobile software during its WWDC 2018 keynote.

That's about two months away. Apple typically announces its new iOS update during this developers conference and issues the first developer beta within the next week. It also issues a public beta, previously also launched in June, for every non-developer willing to test it out.

Apple needs these betas more than ever for iOS 12, as it's been plagued with so many iOS 11 problems. The company is likely to continue with the same beta rollout schedule since it values this feedback from so many users.

The actual iOS 12 release date for everyone else is expected to be in September, right as the iPhone 9 and maybe the iPhone X2 launch. We don't yet know the names of Apple's next phones, but rumors point to a cheaper version of the iPhone X with an LCD screen and an iterative update by way of the X2.

iOS 12 to focus on reliability over big changes

"iOS 12 just works" may be Apple's big message about its next iPhone update, as it's reportedly focusing on reliability and shelving many exciting features.


There have been so many glitches and bugs with the current mobile operating system that the team said to be working on the software allegedly got a directive to drop refreshes to the Camera, Mail, and Photos apps to work on stabilization. We may also miss out on a planned home screen redesign.

This is both good and bad news if you were looking forward to iOS 12. There may be fewer front-facing features, but your iPhone may reset less. It's hard to argue with that.

iOS 12 apps and macOS together at last


One of the biggest new iOS 12 features may actually be for your computer: Apple may bring first and third-party iOS apps to your Mac computer. Why can't you control your smart home with the Home app via that all-powerful iMac Pro? It's a ridiculous notion.

Apple is rumored to be allowing developers to expand their app ecosystem to the forthcoming macOS 10.14 update. Apple's own apps, like Home, are also said to be finally making the jump, according to Bloomberg.

More Animojis in more places (like iPad)


Whether you demanded it or vehemently opposed it, Apple is due to bring more Animojis to iOS 12 for use with the iPhone X Face ID camera. The navigation of these character masks should get easier too, according to Bloomberg.

Apple's Animoji character may make two jumps. First, the natural jump to FaceTime for video chats behind a virtual panda, robot and poop mask. Second to what may be a new iPad Pro 2018 with a Face ID camera. We've seen some evidence of an updated iPad recently, so that makes sense.

Here are more features we wish were coming to iOS 12...

What we want to see

While little is known about iOS 12 yet, we have a clear idea of some of the things we want to see, such as the following.

1. Wi-Fi and Bluetooth toggles that work properly



Control Center has been improved for iOS 11, but one thing we’re not such fans of is the fact that you can’t actually turn off Wi-Fi or Bluetooth from it. 

Tap either toggle and your device will disconnect from Wi-Fi networks and Bluetooth accessories, but won’t actually turn off their radios.

There are good reasons for this, as it ensures accessories like the Apple Pencil and Apple Watch 3 will continue to work, as well as features such as AirDrop and AirPlay, but there are also plenty of reasons you might want to fully disable them.

Currently to do that you have to head to the main Settings screen, so in iOS 12 we’d like there to at least be an option to have proper ‘off’ toggles in Control Center - perhaps with a harder 3D Touch?

2. Wishlist returned to the App Store

The App Store has been overhauled as part of iOS 11, and for the most part, it’s for the better, but one feature has been killed off in the process, namely wish lists.

Previously, if you saw an app or game you liked the look of but didn’t want to buy it then and there (perhaps because the price was high or you were using cellular data) you could add it to your wish list so you wouldn’t forget about it.

You can’t do that anymore, and nor can you see your old wish list, so good luck remembering anything you’ve added. It was a handy feature and we’d like to see it – along with everything we added to it – returned in iOS 12.

3. Camera controls in the camera app



People often talk about how intuitive iOS is, and for the most part, they’re right, but there are some aspects which really aren’t - namely the camera controls, or rather their location.

If you’ve not used iOS before you’d expect to find them in the camera app, but some, including video resolution, file formats and whether or not to show a grid, are instead on a submenu of the main Settings screen, meaning you have to actually leave the camera app and make several additional taps to change them.

It also means that some users may not even know they exist, especially since some controls are housed in the app, so you might reasonably assume that they all are. We really want to see this changed for iOS 12.

4. A movable back button


When moving around apps in iOS you’ll often want to go back to a previous screen, and as there’s no hardware back button you instead have to tap a tiny option in the top left corner of the display.

This isn’t ideal if you’re right-handed, as it can be a bit of a stretch when using a larger device such as an iPhone 8 Plus, so we’d like to see its position become customizable in iOS 12.

5. More powerful Files

Files promised to be a file explorer and manager for iOS, bringing it closer to a desktop experience, or at least to Android levels of control. But in reality, the first time you open Files you probably won’t see much of anything.

You can connect cloud drives to it, but anything locally stored won’t be visible unless you manually save it to Files. It makes the app a bit confusing and clunky and means you never have a true view of your system’s files and folders. 

For iOS 12 we’d like to see it turned into a proper file manager.

6. More Control Center customization

With iOS 11 Apple has let you pick what you see in Control Center, but its selection isn’t comprehensive. 

We’d love the power to add any setting or app shortcut we want, and also to remove the likes of music controls and screen mirroring, which currently you can’t.

7. System-wide autofill


Password managers are a fast, secure way to log into your various apps and accounts. Or, they’re secure anyway, and on most devices, they’re fast, but not always on iOS.

That’s because for a password manager to autofill the login fields of an app, the app’s developer has to have manually enabled it, which few have.

Apple has somewhat improved things by adding a ‘Password Autofill For Apps’ feature to iOS 11, which does exactly what the name suggests, but only for passwords you’ve stored with Apple.

Apps still can’t tap into your favorite password manager automatically, so the first time you log in to them you’ll have to either type out your username and password manually or copy and paste.

On a computer or Android phone, the password manager experience is seamless. On iOS it’s anything but, so we want this fixed for iOS 12.




Friday, March 30, 2018

3/30/2018 10:30:00 PM

Adobe adds more AI control, customization, transparency in Adobe Target update

Adobe Target will get more AI management tools so enterprises can have more transparency and customization of algorithms. Adobe Campaign also gets key email creation updates.


Adobe is aiming to give marketers more control and transparency over targeting algorithms, as it updates Adobe Target tools.

The company's Experience Cloud, which encompasses Adobe's Marketing Cloud, is now managing 233 trillion customer transactions a year, with more than 150 billion emails sent through Adobe Campaign. A customer transaction includes things like a link click, a web visit, an email open, and other items in a digital footprint.

Meanwhile, Adobe has been adding more automation and artificial intelligence with each update. The challenge for enterprises and business leaders is that they are increasingly relying on data science and artificial intelligence, but in many cases don't have visibility into the algorithms.

At Adobe Summit in Las Vegas this week, Adobe is hoping to open the AI transparency a bit.

Drew Burns, principal product marketing manager for Adobe's digital marketing unit, said the company's AI platform is absorbing CRM data, industry metrics, and individual profiles to give customers more control of the algorithms. "Customers can constrain and customize the algorithms based on what their goals are. We're giving a lot of control to marketers," said Burns.

The AI customization workflow resembles what an enterprise would use for testing. "With automated personalization and targeting capability you can test experiences and evaluate everything from the profile to attributes," said Burns.

Burns outlined the following updates:

Real-time customization of brand recommendation models. Brands using their own algorithms for product and content recommendations can use Adobe Target Premium to apply new rules, filters, and attribute weightings to specific audiences and individuals.

Personalization insights via a new report type in Adobe Target. Adobe's Personalization Insights showcases what audience traits were influential in building the model and how they were applied. Personalization Insights launches in beta in the spring.




Propensity score model comparisons. Adobe Target will allow marketers, product owners, developers, and data scientists to factor in their custom propensity scores to gauge a customer's likelihood to purchase a product or churn. These comparisons will be available in June.
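
For readers unfamiliar with the term, a propensity score is simply a model's estimated probability that a given customer will take an action such as purchasing or churning. Here is a minimal sketch with scikit-learn, using invented behavioral features and data purely for illustration:

```python
# Minimal sketch: score each customer's propensity to churn with logistic
# regression on a few invented behavioral features.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: visits in the last 30 days, emails opened, days since last purchase.
X_train = np.array([[12, 5, 3], [1, 0, 90], [8, 2, 10], [0, 1, 120]])
y_train = np.array([0, 1, 0, 1])   # 1 = churned, 0 = retained

model = LogisticRegression().fit(X_train, y_train)

# The propensity score is the predicted probability of the "churn" class.
new_customers = np.array([[2, 0, 60], [15, 7, 2]])
print(model.predict_proba(new_customers)[:, 1])
```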

Adobe is also adding new features to Adobe Campaign to simplify and bolster email campaigns. Email remains a key cog in omnichannel marketing plans and one of the tools with the strongest enterprise returns on investment, said Kristin Naragon, director of product marketing for Adobe Campaign. "Email is a topic that never goes away. Love it or hate it, the fact we're still talking about it either way speaks to its staying power and importance in daily mindshare," said Naragon.

The updates to Adobe Campaign include:

Creative Designer, which is a tool to simplify email creation and design. Marketers can use a visual UI to drag-and-drop templates and content fragments. Each fragment is customized to the header, image, and text level. The Creative Designer is also integrated with Adobe Campaign's content library.



Behance Collection of email templates. Behance, a platform used by artists to market their work, will integrate with Adobe Campaign so email designers can use pre-built templates from Behance community members.

Scaling tools to bolster email sends, improve personalization, and boost delivery efficiency.