Breaking

Saturday, June 30, 2018

6/30/2018 10:27:00 PM

Huawei 5G phone coming next year

Chinese phone maker plans "5G-ready" phone launch for mid-2019


Huawei has confirmed it is set to launch a 5G-ready smartphone next year.

Speaking at the Mobile World Congress Shanghai event, Huawei Rotating Chairman Eric Xu revealed the company is planning a range of 5G solutions in 2019 as it continues its push towards the next-generation networks.

This includes launching a 5G-ready smartphone in June 2019, following the release of its 5G-ready Kirin mobile processor early next year.

Huawei 5G phone launch

"These products will allow consumers that want higher speeds to enjoy an incredible 5G experience as soon as possible," Xu said. "Huawei is ready to work with our industry partners, to invest and to innovate, so that together we can succeed in delivering the 5G mission."

Huawei has long been one of the leaders in pushing 5G technology ahead of the expected worldwide launch of the new superfast networks in 2020.

The company revealed at Mobile World Congress 2018 that it had already developed a 5G-ready modem as part of its plans to launch a commercially-ready network by the end of 2018.

Huawei currently says it is working with more than thirty of the world’s biggest carriers to further its 5G research, including BT, EE and Vodafone in the UK, and has already spent $600m on developing the technology.


6/30/2018 09:07:00 PM

MongoDB 4.0 aims for cloud-friendliness

While it has become fashionable for operational databases to become multi-model, MongoDB has stuck to its knitting with the document model. But in the 4.0 release, it ticks some checkboxes on widely different sides of the database spectrum.


You can't knock MongoDB for staying close to its roots. Playing a similar role in the NoSQL world as the original MySQL did with the LAMP stack, MongoDB has remained known as the operational database that is developer-friendly. As rivals are embracing multi-model approaches, MongoDB continues to believe that the best way to represent the diversity of models like key-value, graph, or text is within the document model itself. But with the 3.0 generation, MongoDB took a more enterprise-grade path starting with a higher-performance, extensible (or pluggable) storage engine that addressed its weakness with writes.

At MongoDB World this week, the company is announcing general availability of the new 4.0 release, plus several beta features relating to the cloud. With the 4.0 release, MongoDB has aimed to address two different constituencies: those demanding transaction support, and those wanting ease of development in the cloud.

MongoDB first broke some of the news last winter by disclosing in advance that 4.0 would take ACID. To recap, ACID support now extends across multiple documents, meaning it can be enforced across one (or more) collections. But initially, that support is restricted to a single replica set, with distributed transactions across sharded clusters to go live with the 4.2 dot release later in the year.
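To make the feature concrete, here is a minimal sketch of a multi-document transaction from the driver side, using PyMongo 3.7+ against a replica set; the database, collections, and connection string are hypothetical, not taken from MongoDB's announcement.

```python
# A minimal sketch of a MongoDB 4.0 multi-document ACID transaction via
# PyMongo 3.7+. All names and the connection string are hypothetical.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
db = client.bank

with client.start_session() as session:
    # Everything inside start_transaction() commits or aborts as one
    # unit, even though it spans two collections.
    with session.start_transaction():
        db.accounts.update_one(
            {"_id": "alice"}, {"$inc": {"balance": -100}}, session=session)
        db.accounts.update_one(
            {"_id": "bob"}, {"$inc": {"balance": 100}}, session=session)
        db.audit.insert_one(
            {"from": "alice", "to": "bob", "amount": 100}, session=session)
```

If any statement inside the block raises, the whole transaction aborts, which is exactly the cross-collection guarantee the 4.0 release adds.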

But out of the box, the 4.0 release adds a related ACID-consistency feature that makes reads off secondary replicas easier: a new non-blocking secondary read capability. This addresses a weakness in availability with MongoDB's primary/secondary (a.k.a., master/slave) design. MongoDB's replica design has been intended to enforce strong consistency by default with fast failover. But that meant slower performance when reading from replicas that are being updated by the primary; with the new option, non-blocking reads from those replicas will be faster if you choose a more relaxed consistency model. While this does not offer the range of consistency options of cloud-native platforms like Azure Cosmos DB, it is a step toward better choices.
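From the application side, choosing the relaxed model is just a read-preference setting; the non-blocking behavior on the 4.0 server needs no special API. A hedged PyMongo sketch with invented names:

```python
# Sketch: routing reads to secondaries under a relaxed consistency
# model. The 4.0 server makes such reads non-blocking; the driver only
# chooses where to read. All names here are hypothetical.
from pymongo import MongoClient, ReadPreference

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
orders = client.shop.get_collection(
    "orders", read_preference=ReadPreference.SECONDARY_PREFERRED)

# May return slightly stale data, in exchange for offloading the primary
# (and, in a multi-region cluster, reading from a nearby replica).
recent = list(orders.find({"status": "open"}).limit(10))
```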

The new release also adds another enterprise database-oriented feature: support for basic transformations inside the database. So, if you ingest a set of data that has dates stored as character strings, you can convert them inside MongoDB with the new $convert function, as opposed to requiring an external tool or manual coding. With the new feature, data transformations can be performed as part of the aggregation pipeline simply by invoking a function. And while we're on the topic of aggregation pipelines, you can now build them through drag and drop in Compass, MongoDB's GUI tool.
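As an illustration, normalizing a string-typed date inside the pipeline might look roughly like the following; the field and collection names are invented for the sketch, but the input/to/onError shape follows the 4.0 $convert operator.

```python
# Sketch: using the 4.0 $convert operator in an aggregation pipeline to
# turn a string field into a real date. Names are hypothetical.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
pipeline = [
    {"$addFields": {
        "created": {"$convert": {
            "input": "$created_str",  # e.g. "2018-06-30T10:27:00Z"
            "to": "date",
            "onError": None,          # degrade gracefully on bad values
        }},
    }},
    {"$match": {"created": {"$ne": None}}},
]
docs = list(client.shop.events.aggregate(pipeline))
```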

The company is also opening a public beta of MongoDB Charts, a new in-database visualization feature first disclosed last year that is not necessarily meant to replace Tableau. The Charts feature eliminates the need for using MongoDB's BI Connector to feed a separate SQL-based visualization tool. But the real differentiator is that it can visualize JSON documents without having to "flatten" their structure (and lose the richness of nested data), as would otherwise occur with the original SQL option.

Other features in 4.0 address cloud and mobile deployment. A year ago, MongoDB released Stitch as a beta, a developer-oriented serverless compute environment available on its Atlas cloud managed service. MongoDB Stitch is now generally available. It departs from serverless offerings like AWS Lambda in that it supports stateful applications. Evidently Stitch has proven popular, with over 23,000 apps written on it to date and new ones currently arriving at a pace of 500 per day.

The 4.0 release includes previews of several mobile features. The first is a mobile embedded version of MongoDB that will run on smartphones, tablets, and IoT devices, available as a private beta in the 4.0 release. The other is Mobile Sync, a service that will appear in MongoDB's Stitch. We're employing the future tense here because, while Stitch just emerged from beta in 4.0, Mobile Sync is, in Mongo's words, "coming soon." Together, both features open the possibility of running small-footprint MongoDB edge servers for wide-area IoT use cases. Here, MongoDB is treading down a path already opened by rivals such as Couchbase, which has offered a mobile client platform for several years.

Last but not least is a new multi-region capability for MongoDB's Atlas managed cloud service that will let you deploy an instance distributed across multiple world regions. This is where the non-blocking secondary read feature mentioned above will prove critical, as it provides an option for improving global read performance. Because MongoDB lacks the multi-master capability of cloud-native data stores like Cosmos DB, DynamoDB, or Google Cloud Spanner, that option is vital for making multi-region deployment worthwhile. And while it can sometimes prove challenging to keep up with the Joneses, there is a significant constituency for operational databases that wants to keep its cloud options open.




6/30/2018 07:10:00 PM

Google Cloud steps up storage game to court Hollywood, launches Filestore

Google Cloud Platform's Transfer Appliance is also generally available.


Google Cloud launched a new region in Los Angeles, outlined a network attached storage service and made its Transfer Appliance designed to move petabytes of data generally available.

The moves add up to an effort to target media companies and enterprises working with content creators.

Google launched a service called Cloud Filestore that can be handy for movie and production studios that have to render CGI images and move large files efficiently.

Cloud Filestore enables customers to stand up a managed network-attached storage (NAS) setup with Google Compute Engine and Kubernetes Engine instances. Filestore will be available as a storage option in the Google Cloud Platform console.

Dominic Preuss, director of product management at Google Cloud, said the expansion into LA was largely driven by customers that wanted a low latency place to run jobs and store data. Preuss noted that media and entertainment were among Google Cloud Platform's key verticals.


Preuss explained that render farms are one of the primary use cases for Filestore. "If you are a studio or firm working on a piece that takes two or three years you need a last rendering for the effects," he said. "You may need 20,000 to 50,000 cores and don't want to own that infrastructure. You want to run jobs and give that capacity back."

Filestore will have a premium tier for 30 cents per GB per month and a standard tier at 20 cents per GB a month. The premium instance of Filestore can provide up to 700 MB/s and 30,000 IOPS regardless of instance capacity. The goal for Google is to provide a cloud option for rendering movies--that process usually revolves around on-premises hardware, files and software.
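To put the two tiers in perspective, here is a back-of-the-envelope calculation for a hypothetical share; this is illustrative arithmetic only, and actual GCP billing may include other line items.

```python
# Rough monthly cost for the two Filestore tiers quoted above
# ($0.30/GB-month premium, $0.20/GB-month standard). The capacity is a
# hypothetical example, not a Google figure.
def monthly_cost(capacity_gb: int, per_gb: float) -> float:
    return capacity_gb * per_gb

capacity_gb = 10 * 1024  # a 10TB render-farm share, for illustration
print(f"premium:  ${monthly_cost(capacity_gb, 0.30):,.2f}/month")
print(f"standard: ${monthly_cost(capacity_gb, 0.20):,.2f}/month")
# premium:  $3,072.00/month
# standard: $2,048.00/month
```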


Google Cloud's Transfer Appliance is also generally available with new features. Customers have been using the Transfer Appliance for about a year so Google Cloud could get feedback.

The service has two configurations--100TB or 480TB of raw storage. Compression rates are usually 2x raw capacity. The 100TB appliance costs $300, with express shipping usually adding about $500. The 480TB model will run you $1,800, plus about $900 in shipping.
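Folding the quoted prices and the typical 2x compression together gives a rough effective cost per terabyte moved; compression varies with the data, so treat this as a sketch rather than a quote.

```python
# Illustrative arithmetic for the Transfer Appliance figures above:
# raw capacity, ~2x compression, and appliance fee plus shipping.
appliances = {
    # name: (raw TB, appliance fee USD, approx. shipping USD)
    "100TB": (100, 300, 500),
    "480TB": (480, 1800, 900),
}
for name, (raw_tb, fee, shipping) in appliances.items():
    usable_tb = raw_tb * 2  # assuming the typical 2x compression
    total = fee + shipping
    print(f"{name}: ~{usable_tb}TB effective, ~${total} all-in, "
          f"~${total / usable_tb:.2f}/TB")
# 100TB: ~200TB effective, ~$800 all-in, ~$4.00/TB
# 480TB: ~960TB effective, ~$2700 all-in, ~$2.81/TB
```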


Enterprises are likely to use Google Cloud's Transfer Appliance to migrate data centers to the cloud, transferring data for analytics and moving large archives of content.




6/30/2018 04:06:00 PM

TLBleed is latest Intel CPU flaw to surface: But don't expect it to be fixed

Researchers find a new side-channel attack against a performance-enhancing feature in Intel CPUs.



Intel won't be patching a newly revealed side-channel vulnerability in its CPUs, even though it could be used to leak encryption keys for signing a message.

The flaw, which will be presented at the Black Hat USA 2018 conference, is why OpenBSD recently decided to disable hyperthreading on Intel CPUs.

The OpenBSD project's chief, Theo de Raadt, said he dropped support for the feature after viewing the paper from researchers at the Systems and Network Security Group at Vrije Universiteit Amsterdam.

The Register reported on Friday that the paper details an attack on Intel's Hyper-Threading technology to reliably extract a 256-bit EdDSA encryption key used for cryptographically signing data.

The researchers argue that their attack, dubbed TLBleed, is able to leak the keys from another program in no less than 98 percent of tests, depending on the Intel CPU architecture. The leak happens when the key is being used to sign data.

As the attack relies on Intel's Hyper-Threading, this side-channel flaw differs from Spectre and Meltdown, which exploit speculative execution. Intel's Hyper-Threading technology is available on Intel Core, Core vPro, Core M, and Xeon processors.

In a publicly available summary, the researchers note that the side-channel attack leaks information from the Translation Lookaside Buffer (TLB), a special type of memory cache that stores recent translations that map virtual to physical memory addresses.

If Hyper-Threading is enabled, a single core can execute multiple threads simultaneously for performance gains, but that core also shares the same memory caches and TLB.

The attack makes it possible for one thread to see how another accesses the CPU through the TLB, and to use this information to work out secrets from another program stored in shared RAM.

"Our TLBleed exploit successfully leaks a 256-bit EdDSA key from cryptographic signing code, which would be safe from cache attacks with cache isolation turned on, but would no longer be safe with TLBleed. We achieve a 98 percent success rate after just a single observation of signing operation on a co-resident hyperthread and just 17 seconds of analysis time."

The researchers say their attack is able to extract this key while a program is signing a message with the libgcrypt cryptographic library.

However, to exploit the flaw, an attacker would already need to have malware running on a target system or be logged in. But the vulnerability could pose a threat to virtual machines on a public cloud, which could be exploited from another instance on the same machine.

Intel appears unlikely to patch the bug and did not award the researchers payment under its side-channel bug bounty. The company has said its cache attack protections are sufficient to block TLBleed attacks.

However, Ben Gras, one of the researchers behind TLBleed, said in a tweet that the attack shows that cache side-channel protections, such as cache isolation, are not enough.

Intel told ZDNet that it had been made aware of the Vrije Universiteit research and TLBleed, which it stressed is unrelated to Spectre or Meltdown.

"Research on side-channel analysis methods often focuses on manipulating and measuring the characteristics (eg, timing) of shared hardware resources. These measurements can potentially allow researchers to extract information about the software and related data," Intel said in a statement.

"Software or software libraries such as Intel Integrated Performance Primitives Cryptography version U3.1, written to ensure constant execution time and data independent cache traces, should be immune to TLBleed."TLBleed is latest Intel CPU flaw to surface: But don't expect it to be fixed

Researchers find a new side-channel attack against a performance-enhancing feature in Intel CPUs.

Intel won't be patching a newly revealed side-channel vulnerability in its CPUs, even though it could be used to leak encryption keys for signing a message.

The flaw, which will be presented at the Black Hat USA 2018 conference, is why OpenBSD recently decided to disable hyperthreading on Intel CPUs.

The OpenBSD project's chief, Theo de Raadt, said he dropped support for the feature after viewing the paper from researchers at the Systems and Network Security Group at Vrije Universiteit Amsterdam.

The Register reported on Friday that the paper details an attack on Intel's Hyper-Threading technology to reliably extract a 256-bit EdDSA encryption key used for cryptographically signing data.

The researchers argue that their attack, dubbed TLBleed, is able to leak the keys from another program in no less than 98 percent of tests, depending on the Intel CPU architecture. The leak happens when the key is being used to sign data.

As the attack relies on Intel's Hyper-Threading, this side-channel flaw differs from Spectre and Meltdown, which exploit speculative execution. Intel's Hyper-Threading technology is available on Intel Core, Core vPro, Core M, and Xeon processors.

In a publicly available summary, the researchers note that the side-channel attack leaks information from the Translation Lookaside Buffer (TLB), a special type of memory cache that stores recent translations that map virtual to physical memory addresses.

If Hyper-Threading is enabled, a single core can execute multiple threads simultaneously for performance gains, but that core also shares the same memory caches and TLB.

The attack makes it possible for one thread to see how another accesses the CPU through TLB and use this information to work out secrets from another program stored in shared RAM.

"Our TLBleed exploit successfully leaks a 256-bit EdDSA key from cryptographic signing code, which would be safe from cache attacks with cache isolation turned on, but would no longer be safe with TLBleed. We achieve a 98 percent success rate after just a single observation of signing operation on a co-resident hyperthread and just 17 seconds of analysis time."

The researchers say their attack is able to extract this key while a program is signing a message with the libgcrypt cryptographic library.

However, to exploit the flaw, an attacker would already need to have malware running on a target system or be logged in. But the vulnerability could pose a threat to virtual machines on a public cloud, which could be exploited from another instance on the same machine.

Intel appears unlikely to patch the bug and did not award the researchers payment under its side-channel bug bounty. The company has said its cache attack protections are sufficient to block TLBleed attacks.

However, Ban Gras, one of the researchers behind TLBleed, said in a tweet that the attack shows that cache side-channel protections, such as cash isolation, are not enough.

Intel told ZDNet that it had been made aware of the Vrije Universiteit research and TLBleed, which it stressed is unrelated to Spectre or Meltdown.

"Research on side-channel analysis methods often focuses on manipulating and measuring the characteristics (eg, timing) of shared hardware resources. These measurements can potentially allow researchers to extract information about the software and related data," Intel said in a statement.

"Software or software libraries such as Intel Integrated Performance Primitives Cryptography version U3.1, written to ensure constant execution time and data independent cache traces, should be immune to TLBleed."


READ MORE:-


6/30/2018 01:22:00 AM

Sony Smartwatch 4: will it ever happen?

Sony isn't making a smartwatch any time soon



Almost four years after announcing its third smartwatch, Sony has shown no intention of releasing a new Wear OS watch, and has instead opted to focus its mobile business on new iterations of its phones and hearable products like the Xperia Ear Duo.

The Sony Smartwatch 3 from 2014 was for a long time one of the best smartwatches on the market, sporting an attractive design, built-in GPS, waterproofing, NFC and Wi-Fi capabilities.

Most of this tech is now commonplace on smartwatches, but in 2014 this was groundbreaking stuff. 

So why hasn't Sony updated its smartwatch since then, especially when it was critically so well received? Will we ever see a Sony Smartwatch 4? We've put together everything we know down below, followed by some features we'd love to see if the company does make another smartwatch.

Cut to the chase
What is it? A new smartwatch from Sony
When is it out? Possibly never
What will it cost? Upwards of $299/£210/AU$420 - if it ever launches

Will there ever be a Smartwatch 4?

The answer is: possibly, but not yet. Sony told TechRadar back in March 2017 that its wearable ambitions have been put on hold. 

Kaz Tajima, senior vice president of creative design and product planning at Sony Mobile, told TechRadar, "There’s still not a sufficient solution for the end user from a technological point of view. With the watch you have to charge it every day, which is unnatural for a watch.

“Until we find a good, technological solution – or a form factor solution – to make these things feel natural to wear, we’ll keep looking at [the wearables sector].” 

Those points haven't stopped other manufacturers from pushing ahead in the smartwatch space, but it's clear Sony is taking a long break, and won't jump back in until it deems the technology proficient enough.

There's clearly support for Sony's wearable devices, too. For example, in 2017 a petition signed by 4,000 Sony Smartwatch 3 fans tried to encourage Sony to update the watch to Android Wear 2.0. The company ultimately didn't see fit to support it, though.

That's a shame for anyone who owns a Sony Smartwatch 3 or wanted a new device from the company, but right now the firm seems to be focusing on hearables like the Xperia Ear Duo or the concept device released the year before that.

If we ever see the Sony Smartwatch 4, it may be a long time coming. Don't give up all hope of a new wearable from Sony, but there are currently no leaks or rumors so we would expect the quotes above from Tajima to still stand for how Sony feels about smartwatches.

Sony Smartwatch 4: what we want to see


Originally we wrote a list of some features we'd like to have seen on the watch back in 2015, soon after reviewing the Smartwatch 3.

Those aren't as relevant in the 2018 smartwatch market, but we've put together a few of the old ones with some new ideas that we'd like to see the company include if Sony ever decides to make a new smartwatch.

A gym and office-friendly design

The Sony Smartwatch 3 was a slick looking wearable for its time, but most would agree that it was crafted more with the sporty type in mind.

Sure, owners with a little more cash could splurge on the stainless steel strap, but that would have made the cost shoot up too. Other companies, such as Apple with the Apple Watch 3, have achieved a design that works for both sport and fashion, so we'd like to see Sony do the same thing here.

More accurate and efficient GPS

During our testing of the Smartwatch 3, we loved leaving our phones behind and taking advantage of its built-in GPS. But, on the flipside of this cool feature are a few serious downsides: the sensor's accuracy and its impact on the wearable's battery.

We originally found that the distance tracked with GPS tended to differ quite a bit from what our phones would report, sometimes to the point that we didn't know which one to believe.

Hopefully, the Smartwatch 4 would receive the hardware improvements necessary to ensure a more accurate tracking experience. Considering it's now four years later and GPS is better on a lot of smartwatches, we expect Sony could do this.

Google Pay support

Not every smartwatch has NFC or Google Pay support, but most top-end devices now have the tech inside and we'd love to see it come to a future Sony watch.

It's a bit of a disappointment now when high-end smartwatches don't come with the tech built in, so hopefully Sony will see fit to include it if it ever makes another smartwatch.


An awesome-r battery

The 420mAh battery packed inside the Smartwatch 3 is more juice than you'll find in a lot of other smartwatches. In addition to having a higher battery capacity than its competitors, it could also last longer than most of the competition at the time. It would run for up to two days, depending on how you used it.

That said, battery life is getting much better on smartwatches now so we'd hope to see Sony improve that if it makes a next-gen device.

More sensors

The built-in GPS, gyroscope, and accelerometer on the Smartwatch 3 allowed for fairly in-depth tracking, but we'd love to see Sony push even more sensors inside the Smartwatch 4. 

Noticeably lacking were an optical heart rate sensor and an altimeter, which would track your heart rate and your altitude, respectively. These would give it the complete set of abilities we're looking for from today's smartwatches - let's hope Sony takes note.


Go cellular

A Wear OS device with Wi-Fi capabilities is like a bird with unclipped wings that's still locked in a cage. Adding cellular access to the Smartwatch 4 would allow it to operate over a cellular signal while untethered from your smartphone.

Plus, it might not be practical, or even necessary for some users, but we'll always take more features over fewer if it improves the smartwatch experience.


It's hip to be a square

Some might feel differently, but we think the Smartwatch 4 should hang onto the square design. Come on. It's rather charming, don't you think?

OK... it's a little tough to defend the form factor, especially when many other gorgeous Wear OS watches come with circular displays, but a square design would allow Sony's next watch to look unique.



Friday, June 29, 2018

6/29/2018 10:15:00 PM

Xiaomi announces Redmi 6 Pro with dual rear cameras, Mi Pad 4 with Snapdragon 660 SoC

Available starting at CNY 999


Chinese smartphone maker Xiaomi has announced a new budget smartphone and a tablet, called the Redmi 6 Pro and Mi Pad 4 in China. The devices have been in the news for some time now after they were leaked, revealing their design and specifications.

The Mi Pad 4 is the successor to the popular Mi Pad 3 that was launched in China in April last year. The Redmi 6 Pro is the first Redmi device to feature a notch and we may see more Redmi devices being launched with a notch in the future.

The Mi Pad 4 has been launched in WiFi and WiFi + LTE variants and will go on sale in China from June 27, starting from CNY 1,099 for the WiFi variant and CNY 1,499 for the LTE variant. The Redmi 6 Pro will be available from June 26, priced from CNY 999.

Xiaomi Redmi 6 Pro Specifications


The Xiaomi Redmi 6 Pro runs on Android 8.1 Oreo with MIUI 9 skinned on top and features a 5.84-inch full HD+ 2.5D curved glass display with a resolution of 2280 x 1080 pixels and an aspect ratio of 19:9. The device has a notch at the top for the front camera and sensors.

In terms of performance, the Xiaomi Redmi 6 Pro is powered by an octa-core Qualcomm Snapdragon 625 SoC coupled with an Adreno 506 GPU. In terms of memory, the device has been launched in three variants – 3GB RAM + 32GB internal storage, 4GB RAM + 32GB internal storage and 4GB RAM + 64GB internal storage.

Coming to the camera department, the Redmi 6 Pro features a dual camera setup at the back consisting of a 12MP Sony IMX486 sensor with 1.25um pixel size, f/2.2 aperture, phase detection autofocus, LED flash and a secondary 5MP Samsung S5K5E8 sensor with f/2.2 aperture and 1.12um pixel size. On the front, it sports a 5MP selfie camera.

It is powered by a 4,000mAh battery and connectivity options on the device include 4G VoLTE, Wi-Fi 802.11 a/b/g/n, Bluetooth 4.2, GPS, 3.5mm audio jack and an Infrared sensor.

Xiaomi Mi Pad 4 Specifications


The Xiaomi Mi Pad 4 also runs on Android 8.1 Oreo with MIUI 9 skinned on top. It features an 8-inch full HD+ display with a resolution of 1920 x 1200 pixels and an aspect ratio of 16:10.

In terms of performance, the Mi Pad 4 is powered by an octa-core Qualcomm Snapdragon 660 SoC coupled with Adreno 512 GPU. In terms of memory, the device will be available in two variants – 3GB RAM + 32GB internal storage and 4GB RAM + 64GB internal storage.

Coming to the camera department, the Mi Pad 4 features a single 13MP primary camera with OV13855 sensor and f/2.0 aperture. On the front, it sports a 5MP selfie camera with Samsung S5K5E8 sensor and f/2.0 aperture.

The Mi Pad 4 is powered by a 6,000mAh battery and connectivity options on the device include 4G LTE, WiFi 802.11ac, Bluetooth 5.0, GPS, a 3.5mm audio jack and a USB Type-C port.

Pricing and Availability

The Redmi 6 Pro is priced at CNY 999 for the 3GB RAM variant, CNY 1,199 for the 4GB RAM + 32GB storage variant and CNY 1,299 for the 4GB RAM + 64GB storage variant. It will be available in China starting from June 26 in Rose Gold, Gold, Blue, Black and Red colors.

The Mi Pad 4 has been priced at CNY 1,099 for the 3GB RAM variant, CNY 1,399 for the 4GB RAM variant and the LTE variant has been priced at CNY 1,499.



6/29/2018 09:26:00 PM

Next-generation VoIP: Connecting up communications silos with the cloud

Company believes the new X Series is the next step to dominance in its fully integrated VoIP strategy.


In the highly competitive telecoms market, 8x8 is hoping that its new X Series VoIP service will give it an edge. The platform combines call, collaboration, conferencing and contact center solutions. ZDNet talked to the company's UK MD, Kevin Scott-Cowell.

Scott-Cowell: We have been around since the 1980s. Originally [as Integrated Information Technology] we were a chip designer. Then we did a fantastic strategic shimmy in the '90s downturn.

We had VoIP [Voice over IP] technology and we quickly decided to go and build VoIP products and VoIP services. At first, we were largely targeting consumers but that proved to be a bit of a pain so then we moved to B2B and we have grown from there. Then around 2000/2004, we started building VoIP cloud-based services. We went for a pure cloud play and have built ever since.

In the UK, we went a similar route to our US arm. In 2004, Voicenet Solutions started life as a pure-play, voice-over-IP, business-to-business solution. I joined that company in 2010.

We met 8x8 in 2012 when they acquired some contact centre technology and we wanted to re-sell it. All of a sudden I found myself talking a lot to 8x8 and we realised we were both pretty similar in hosting pure-play VoIP.

So in December 2013, Voicenet was acquired by 8x8 and we became the first non-US company to have VoIP with global reach. We were about 50 people and we considered ourselves the first among equals at hosted voice.

Are you publicly held or private?

8x8 Inc is on the New York Stock Exchange. We used to be NASDAQ-listed but moved to the New York Stock Exchange in January.

We've been publicly listed all the way through, even at the point of transition from a chip-maker to VoIP.

So, we had a nice nucleus when we became established with 50 people, all of whom had experience running big platforms. Now we have around 240 people in the UK, but a lot of the growth has been from the acquisition of DXI [a cloud-based comms provider] in 2015.

Who do you see as the competition?

Most of the other people are re-selling other people's solutions. Many of them will often say they are supplying their own solutions but when you look closely they have other people's technology bolted on.


Scott-Cowell: Good customer experience will drive success

Photo: 8x8

And our technology is evolving. We have over 50 patents which are key for us, as it allows us to drive the development of products.

We have 15 data centres and are growing globally. We have all the technology around geo-routing and we've got a lot of the patents around this sort of stuff.

We have the technology that monitors the quality of the calls through the endpoints. There are probes on the system that look for the best routing of the calls.

We've got intelligent algorithms that keep that quality high. The Tolly Report rated us as number one in call quality. We take all that seriously -- we don't just plug it in and hope that it works. We're doing pro-active testing to manage the network.

What do you see as being your key clincher for winning a major contract?

There are a lot of smaller players who can provide good services locally, but once you get outside the UK they run out of options, because they haven't got the network, they haven't got the partners.

Tell me about the X Series.

Our thinking is that we have gone through the first wave of Enterprise communications, which is on-prem stuff -- the contact center, PBX, and maybe video conferencing.

And then in around 2000 and whatever, along comes cloud and TCOs are better and flexibility is probably a bit better. What we are seeing now is that you've got lots of stuff in the cloud but actually you've got disparate systems -- silos. And within those silos, you've got a contact center but let's take chat as an example.

So, you go into businesses now and they've got a contact center, a cloud solution from one vendor, it's got a PBX from another, it's got video conferencing from another, chat from another. You've got four streams that don't necessarily integrate with one another and then you get chat everywhere.

Each different group in the business is using a different chat engine and so you've got silos. But you've also got silos within silos. You've got a whole bunch of data but it's a crazy mess. It provides function and probably some productivity, but in terms of customer service and employee experience it's a nightmare.

A customer rings in and says, "Did you get my email?" and the person on the end of the line has no idea what the customer is talking about. There is no way to look up the customer's communications. The customer waits, the employee has no way of dealing with it, and so the customer starts looking for a different supplier.

We know the paradigm: Good customer experience will drive success and the opposite will lead to failure.

We all know we need to move to a different paradigm and that's where X Series comes in because we've got the technology.

The bottom line is offering customers a route from silos to a single platform. Single data across all the different platforms is, when it comes down to it, what we are aiming for. X Series is the way to get there.

There is a data lake and, we believe, AI is the way to be able to use it and, more importantly, for us to help our customers get the most out of it.

We will be going ahead with the first X Series products around June or July.

What's the link that you think is missing?

The data. You need access to the data. If you haven't got the data in your cloud then you can't move forward.

You can use your system now and work the software, but the data is somewhere else. So, when the third party that the customer is integrating with is sold to a company that competes with you, suddenly the system that you have been relying on isn't there anymore.

The fact that we will have all the data, all of the software, all the analytics in one place means that we've got this huge amount of data that makes the whole thing a true customer experience.




6/29/2018 07:11:00 PM

Western Digital shakes up data storage

Western Digital, perhaps better known as WD, announces a series of products and initiatives designed to reflect the company they've become - and not the disk drive maker they used to be. Marketing hype? Or do they really have the goods?


While we weren't looking, WD assembled a formidable array of storage technologies, beyond their leadership in disks and SSDs. That includes all-flash arrays, scale-out object storage, and industry-leading enclosures, with patented cooling and anti-vibration technology. Today's announcements take them to another level.

Western Digital adds NVMe, flash heft to data center storage lineup

BIG DATA, FAST DATA

CPU performance hit a wall years ago. Multi-core chips have helped, but GPUs - with hundreds or thousands of execution units - have kept the industry pushing the performance envelope, especially for machine learning. So there's been a renewed focus on moving processing closer to data, as the growth of big data has made it costly to move the data to processors.

Fast SSDs, including the NVMe/PCIe ones pioneered by WD acquisition Fusion-io, are still improving in performance, capacity, and cost. But the cloud - and most enterprises - are still relying on hard drives for capacity, where WD's HGST unit has led the way with their helium filled drives.

At the same time, the growth in realtime systems, such as computer vision for self-driving cars, and streaming data analysis, have put a premium on managing data that is both fast and big. In what may be their most consequential announcement this morning, WD is now offering flexible and efficient drive enclosures with powerful built-in servers that support the Docker container framework. Moving apps to your data has never been easier or more cost-effective.

WHO IS WD?

WD, founded in 1970, has a long and varied history. It began as a chipmaker - for a time it was the world's largest calculator chip company - but it found lasting success creating hard drive controllers, especially the ATA drives popularized by the IBM PC and its many clones. In the mid-80s they bought the assets of HDD vendor Tandon and began manufacturing disks.

Over the years they became vertically integrated, acquiring a read/write head vendor and later a disk media manufacturer. You may have first become aware of them through their 10,000RPM Raptor and VelociRaptor drives, a popular choice for gamers and servers.


WD started spreading its wings when it acquired HGST, which itself was formed when Hitachi bought IBM's once-leading disk business. HGST continues to be known for excellent reliability, and it has led the industry in the move to helium-filled server drives.

Since then they've acquired Amplidata, an early and under-marketed leader in scale-out object storage, and most recently, Tegile, an SSD-based array vendor.

SYMBIOTIC DESIGN

WD is the only company that builds storage devices, both flash and disk, from the bit up, and integrates them into arrays with advanced software and hardware. Now they're putting all the pieces together, creating powerful systems based on optimizing each layer in the storage stack, from bits to arrays and interconnects - an approach they call symbiotic design.

Another company has taken a similar, highly integrated, approach to building products. Apple's done pretty well with that, and WD is trying to do the same with storage.

THE STORAGE BITS TAKE

There was too much in the briefing I attended to easily cover in one piece. I've asked WD for additional in-depth briefings on such things as their new file-on-top-of-object storage, microwave-assisted magnetic recording, and storage fabrics.

But as a long-time participant and observer of the storage industry, I believe we're seeing the emergence of an inventive and resourceful storage systems company that is bringing real creativity and value to our data-centric world.

Courteous comments welcome, as always. I've done work over the years for several of the companies that WD has acquired and for WD itself. They also paid for my airfare to attend yesterday's briefing in Silicon Valley.




6/29/2018 04:03:00 PM

Future SD cards: Expect monster 128TB storage, plus zippier data transfers

New SD specification will bring SSD performance to the next generation of SD cards.



The SD Association has outlined the card types and expected maximum performance for the new spec.

Standards body the SD Association has announced a faster data-transfer interface, dubbed SD Express, and SD Ultra Capacity, which offers a 128TB maximum capacity.

SD Express incorporates PCI Express or PCIe and Non-Volatile Memory express or NVMe, the standard software interface for PCIe SSDs.

The addition of PCIe should enable a maximum data transfer rate of 985 megabytes per second, and together with NVMe will allow SD cards to be used as removable SSDs.

SD Ultra-Capacity (SDUC) cards would be a huge jump from today's maximum SD card capacity of 2TB.

SD Express and SDUC are part of the new SD 7.0 specification unveiled at Mobile World Congress Shanghai this week.

SD Express will be initially offered on SDUC, SDXC, and SDHC memory cards. The speedier interface will enable future SD cards to support gaming and other apps on cards, 8K video capture and playback, VR, video streaming, and large images.

The SD Association has published a whitepaper detailing the specification, including backward compatibility and the requirements for reaching maximum performance.

The new cards with SD Express will have the same shape as today's SD cards and will include the SD UHS-I interface to support legacy SD interfaces.

However, achieving maximum performance requires that a card and host support SD Express, so in reality, many of today's devices will support data transfer speeds of far less than 985MB/s.

If the host has UHS-II or UHS-III pins and is used with an SD Express card, it will enable data transfers of up to 104MB/s, according to the whitepaper.
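Some rough arithmetic shows what that ceiling means in practice, assuming a hypothetical 128GB card image and the two rates quoted above; real cards will also be limited by the flash itself, not just the bus.

```python
# Illustrative transfer-time arithmetic for the rates quoted above.
# The 128GB payload is a hypothetical example.
payload_mb = 128 * 1000  # 128GB expressed in MB
for label, mb_per_s in [("SD Express (PCIe/NVMe)", 985),
                        ("legacy fallback", 104)]:
    minutes = payload_mb / mb_per_s / 60
    print(f"{label}: ~{minutes:.1f} minutes")
# SD Express (PCIe/NVMe): ~2.2 minutes
# legacy fallback: ~20.5 minutes
```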



6/29/2018 09:30:00 AM

New Windows 10 preview makes tons of changes to Edge, Skype, security and more

Along with some potentially useful tweaks for laptop users



Microsoft has deployed a new preview build of Windows 10 (17704) which contains a number of changes, with a good deal of work having been done to the Edge browser, alongside improvements for the Skype app, bolstered security and more.

So let’s start with Edge, which has had its appearance smartened up with elements of Fluent Design, the style that is slowly taking over the Windows 10 desktop interface. The browser now has a depth effect for the active tab to help highlight it more clearly. It's a subtle move, but a helpful one.

Also, there’s a new Microsoft Edge Beta logo – with the word ‘BETA’ stamped across it in not-so-subtle fashion – to remind testers that the browser they’re using isn’t a finished product and may suffer from bugs.

Aesthetics aside, the browser’s Settings menu has been rejigged to put the most commonly used items at the top, making it generally easier to use – plus more customization options have been added so you can tailor it better to your own preferences.

Speaking of customization, Microsoft has added the ability to choose which icons show in the Edge toolbar. Finally, it’s now also possible to choose your preference as to whether media should play automatically in the browser when you visit a website with a video. That functionality was supposed to make the cut for the last preview build, although it ended up getting dropped – but it’s here now.

Skype hype

Moving on to Skype, build 17704 has added a few new calling features, including the ability to take a screenshot during a call, plus Microsoft has moved the screen sharing button to a more prominent and easily reached position. More customization is now available for those on a group call – such as deciding who is highlighted on the main call canvas – as well.

That comes alongside various interface tweaks such as making your contacts easier to access, and easier to digest to boot, thanks to a new layout. The Skype for Windows 10 client now has new customizable themes, too, as well as various other tweaks to the likes of the notification panel. In other words, there’s quite a lot of work here to make Skype a more streamlined experience.

Guarding against exploits

On the security front, under Virus & Threat Protection, there’s a new Block Suspicious Behaviors capability, which leverages Windows Defender Exploit Guard technology to keep an eye out for apps or services which are doing strange things that could be malware-related.

Also, the Windows Diagnostic Data Viewer – which shows the telemetry data Microsoft collects from your PC – has had its interface tuned somewhat, and now allows you to view any Problem Reports that have been sent to Microsoft. In other words, the logs that detail what happened in a crash or other glitch.

Windows 10 video playback settings

Further tweaks made include the introduction of a new video playback viewing mode designed to adapt to the current ambient lighting level, and make a video clip more visible in very bright environments. That could certainly be handy for those running Windows 10 on a laptop who use the machine outdoors.

The Task Manager has also seen a nifty change in that it now presents two new columns showing the power usage of apps at the current time, and over the last two minutes, so you can see if any applications are draining your notebook’s battery excessively.

Finally, new Typing Insights detail exactly how AI has been helping you out with features like auto-correct, for those who use the virtual keyboard. For the full list of changes brought in with build 17704, check out Microsoft’s extensive blog post.

The other major point to note here isn’t an introduction, but a removal. Microsoft has ditched Sets functionality – which brings the concept of tabs from the web browser to the wider desktop interface – from Windows 10 with this preview build.

That’s potentially sad news, as this means it may not make the cut for the big Redstone 5 update due later this year (if you’re suddenly experiencing déjà vu, that’s because it was also dropped from Redstone 4).



6/29/2018 12:30:00 AM

Honor 10 will be a better gaming phone from July 30

Thanks to the GPU Turbo update


The Honor 10 is already a powerful phone, but a new feature is set to make the gaming experience even better.

Dubbed ‘GPU Turbo’, it’s a new graphics processing acceleration technology which was announced a few weeks ago. 

At the time there was little information on it or on when it would land, but we now know it will arrive for the Honor 10 as a software update on July 30 (following a closed beta on July 15), with other Honor handsets getting it later.

We also now know that GPU Turbo apparently increases the efficiency of graphics processing by 60% and reduces the energy consumption of the SoC (system on a chip) by 30%.

Better graphics for longer

Together, these improvements apparently combine to dramatically improve the overall user experience when gaming, with Honor devices able to achieve ‘outstanding graphics quality’ when using GPU Turbo. The feature could apparently also benefit VR and AR applications.

Given its energy-saving credentials it presumably also extends your phone’s battery life while gaming, but that hasn’t been confirmed.

Following the Honor 10, GPU Turbo will apparently land on other handsets such as the Honor View 10 and Honor 9 Lite; however, no time frame has yet been given for those. Moving forward, it sounds like this will become a standard feature on many Honor phones.




Thursday, June 28, 2018

6/28/2018 09:21:00 PM

SUSE Linux Enterprise Server takes a big step forward

SUSE will soon release the next version of SLES, SUSE Manager 3.2, and SUSE Linux Enterprise High Performance Computing 15.


SUSE Linux Enterprise Server doesn't get the ink that Red Hat Enterprise Linux (RHEL) or Canonical's Ubuntu does, but it's still a darn fine Linux server distribution. Now, SUSE takes another step forward in the server room and data center with the mid-July release of SUSE Linux Enterprise Server (SLES) 15.

SLES 15 will be available on x86-64, ARM, IBM LinuxONE, POWER, and z Systems in mid-July. So, no matter what your preferred server architecture, SUSE can work with you.

At the same time, SUSE is presenting SLES 15 as a multimodal operating system. And what's that you ask? It's one that integrates cloud-based platforms with enterprise servers, merges containerized development with traditional development, and combines legacy applications with microservices.

SUSE does this in SLES by using a "common code base" across environments. You then add the appropriate modules for your purpose.

Some of this is just rebranding. For example, SUSE Linux Enterprise Desktop (SLED) is made up, as before, of the core legacy code base plus the desktop code base. But it's more than that. For microservices, for example, you use the legacy code base with the SUSE Container as a Service (CaaS) Platform, and for a private cloud, you add in SUSE OpenStack Cloud. Using this approach, you can also easily deploy SLES both on your in-house servers and on software-defined infrastructure such as Amazon Web Services (AWS), Google Cloud Platform, and/or Microsoft Azure, using the SUSE Linux Enterprise Bring-Your-Own-Subscription (BYOS) programs.

In addition, there are two other SLES versions. The first of these, SLE High-Performance Computing 15, addresses the growing needs of the HPC market with a comprehensive set of supported tools for both x86 and ARM architectures. This includes the slurm workload manager; your choice of mrsh, pdsh, and/or conman for cluster management; and ganglia for performance monitoring.

Then there's SLES for SAP Applications. This edition includes non-volatile dual in-line memory module (NVDIMM) support for disk-less databases and enhanced high availability features for IBM Power Systems. A new feature, "workload memory protection," provides an open source-based, more-scalable solution to sustain high-performance levels for SAP applications.

Put it all together and what SUSE is trying to do is create a Linux not just for servers and traditional IT roles, but for DevOps and continuous integration and development (CI/CD) as well.

Thomas Di Giacomo, SUSE CTO, explained, "As organizations around the world transform their enterprise systems to embrace modern and agile technologies, multiple infrastructures for different workloads and applications are needed. This often means integrating cloud-based platforms into enterprise systems, merging containerized development with traditional development, or combining legacy applications with microservices. To bridge traditional and software-defined infrastructure, SUSE has built a multimodal operating system -- SUSE Linux Enterprise 15."

SUSE is on to something. Stephen Belanger, IDC's senior research analyst for Computing Platforms, wrote, "Linux has become a preferred platform for the cloud and for modern cloud-native application development. It has also gained stature as a preferred development platform for most ISVs. Today Linux is widely used for hosting traditional as well as next-generation applications across bare-metal, virtual, and container-based delivery methods. SUSE Linux Enterprise comes out at the top for SAP applications, mainframes, high-performance computing, and other key Linux enterprise-centric use cases."

Finally, SUSE is also releasing the latest version of its SUSE Manager. In this edition, the program builds on its native server management skills by adding new features so you can use the same console to manage IoT, cloud, and Kubernetes-based container infrastructures. With SUSE Manager 3.2, SUSE promises you can manage everything from your Raspberry Pi-based Internet of Things (IoT) devices to the SUSE CaaS Platform and back again.

If you want a bridge between your old-school IT servers to the 21st century's clouds, containers, and orchestration, you must check out SUSE's new family of operating systems. You'll be glad you did.




6/28/2018 09:17:00 PM

Cloud wars 2018: 6 things we learned in the first half

The most interesting developments in the first half revolved around software-as-a-service as the infrastructure space narrowed. Nevertheless, IoT, AI, and machine learning were difference makers.


With the first half of 2018 coming to a close it's worth revisiting the cloud battle to see how the year is shaping up.

The pecking order outlined in our top cloud providers of 2018 hasn't changed, but there are clearly moving parts worth pondering.

With that take in mind, let's go through the year (so far) in cloud computing.

The field narrows dramatically. Gartner landed with its Magic Quadrant and basically whittled the IaaS market down to a big three in leadership. Not surprisingly, those three were AWS, Microsoft Azure and Google Cloud Platform, which broke into the leadership quadrant. What was surprising is that Gartner thinks only 6 cloud infrastructure players matter--AWS, Azure, Google, IBM, Alibaba Cloud and Oracle.


IBM expands its footprint. Yes, IBM was rated more of a niche player by Gartner. However, IBM did create 18 new availability zones globally in a move that may enable it to play a bit of catch up. With a focus on hybrid deployments, IBM Cloud occupies a unique space but needs to juice its as-a-service revenue growth to keep pace.

Cutting edge as a service. AWS has made its DeepLens camera available to developers. The move is an interesting testbed for machine learning on edge devices. Also, note that AWS also pushed to GA its AR platform dubbed Sumerian.

Developers, developers, developers. Microsoft and Google made their big pitches to developers with Build and I/O, respectively. And cloud was a thread throughout, along with AI. Microsoft made it clear that it is an enterprise company focused on commoditizing AI -- via Azure -- and Google also targeted AI heavily while avoiding the privacy flap facing Facebook. IoT is another differentiator for AWS and Azure.

SAP and Oracle hone their cloud pitches and eye Salesforce. Oracle vs. Salesforce is not a new development. Whether it's CRM, HCM or platform, Oracle and Salesforce compete. The battle has been interesting to watch, but now there's another enterprise giant in the mix: SAP. SAP outlined SAP C/4HANA, a CRM suite that's designed to take on Salesforce. Turns out the application layer in the cloud is also interesting.

Workday goes acquisition-happy. Workday has historically focused on architectural purity and one code base. Well, that's changing. Workday acquired Adaptive Insights and Rallyteam, and now the integration -- UX and code -- begins.

The cloud is nicely profitable. It's not exactly a newsflash that AWS accounts for nearly all of Amazon's operating profit, but Microsoft Azure also had a strong quarter. Google talked up Google Cloud but needs to cough up more data in its quarterly results. IBM's as-a-service revenue also fared well. Toss in software-as-a-service players like Adobe, Salesforce, Workday, and Oracle and there's something to be said for recurring revenue with a dash of lock-in.