Breaking

Thursday, March 12, 2015

3/12/2015 02:28:00 PM

Tableau 9.0 beta rolls out



Tableau will showcase beta version 9.0 of its data visualization and analysis software to a large group of customers today, touting features that include drag-and-drop analytics as well as data-preparation features that formerly had to be done outside the application.

Customers were invited to a "virtual user group" meeting this afternoon to see the latest version, which has been in beta since mid-January.

Version 9.0 also features a new analytics tab making it easier to see and use data-analysis functions; a "data interpreter" designed to figure out where column headers and data begin in an Excel spreadsheet; and one-click smart splitting of data within a column, such as data in "month-year" format.

The data splitting does not require users to enter a field separator; instead, the software figures out what the likely separator characters are.
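A rough sketch of how such separator inference might work (a hypothetical Python illustration only; Tableau's actual heuristics are not public):

```python
# Hypothetical sketch: guess the likely separator as the most frequent
# non-alphanumeric character across the column's values, then split on it.
from collections import Counter

def smart_split(values):
    counts = Counter(ch for v in values for ch in v if not ch.isalnum())
    if not counts:
        return [[v] for v in values]          # nothing to split on
    separator = counts.most_common(1)[0][0]   # best-guess separator
    return [v.split(separator) for v in values]

parts = smart_split(["Jan-2015", "Feb-2015", "Mar-2015"])
# parts -> [["Jan", "2015"], ["Feb", "2015"], ["Mar", "2015"]]
```

The point of the sketch is that the user never supplies the "-"; the code infers it from the data itself.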
The goal, said Francois Ajenstat, Tableau's vice president of product management, is to let customers "answer questions at interactive speed." For example, while it is already possible to generate forecasts in existing versions of Tableau, the new drag-and-drop feature makes it easier -- and also allows users to select an area of a visualization with forecast to get a forecast trend just for that data.
Work was done to speed performance as well as add new features.
One customer-sought function not yet implemented: dynamic parameters, which would allow options for a parameter to be defined based on a data source. In a Reddit AMA (Ask Me Anything) session yesterday, Ajenstat responded: "We have been debating the use cases for dynamic parameters for a while. We wanted to understand what scenarios you needed Dynamic Parameters for."

Another user asked for easier migration of workbook and published data sources -- "the primary 'gotcha' I see from an enterprise deployment perspective."

"Yep, I hear you on that one," Ajenstat responded. "We are going to make this easier in a future release. In the meantime, the REST API may provide a good interim solution."

In response to another question, Ajenstat said that statistical tests will be coming to Tableau, although they are not in the current beta. "This year, we are likely to see stats objects that are composable in calculations, significant improvements to trendlines and forecasting, and possibly a surprise or two," he said.
Among the other features in the 9.0 beta:
·  Tableau will now read binary data files from SPSS, SAS and R.
·  The software supports Spark SQL, Amazon EMR and IBM BigInsights as data sources.
·  Users can now type in calculation formulas in addition to the drag-and-drop options, saving time particularly for data sets with many columns.
·  The application allows users to easily transpose data from wide to long, so that values spread across column headers can be pivoted to key-value pairs. This functionality was previously achieved with a Tableau add-in for Excel.
·  Tableau created a new interface for the Server, internally called Vizportal, "... where the developers took a look at every single screen and every single flow," Ajenstat told Reddit users.
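As a rough illustration of the wide-to-long pivot described above, here is a hedged Python sketch (the column names and helper function are invented for the example; this is not how Tableau implements it internally):

```python
# Toy sketch: pivot a "wide" table, where values are spread across column
# headers, into "long" key-value records.
def wide_to_long(rows, id_col, value_cols):
    """Turn each wide row into one (id, key, value) record per value column."""
    long_rows = []
    for row in rows:
        for col in value_cols:
            long_rows.append({id_col: row[id_col], "key": col, "value": row[col]})
    return long_rows

wide = [{"region": "East", "Jan": 100, "Feb": 120},
        {"region": "West", "Jan": 90, "Feb": 95}]
long_form = wide_to_long(wide, "region", ["Jan", "Feb"])
# Each month column becomes a key-value pair tied to its region.
```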




3/12/2015 02:25:00 PM

Google opens London shop as new marketing strategy

The idea is to expose even more people to the Google brand, and all its products, in an 'amusement park' atmosphere.



Opening its first store-in-a-store in London this week, Google is looking to boost its already worldwide image.

That's the word from industry analysts after Google announced today that it's opening what is going to be called the Google Shop in Currys PC World, a well-known electronics store in London.

"This is about marketing, not selling," said Ezra Gottheil, an analyst with Technology Business Research. "While Apple's stores are real stores with huge volumes, this is about building the brand and exposing people to Google who don't know about all the Google offerings."

The Google shop is set up to offer customers the chance to see and try out Google's range of Android phones and tablets, Chromebook laptops and Chromecast streaming-media devices, as well as learn about how they work together, according to the company.

Store visitors also will be able to try out Google's software tools and apps, using a series of immersive features, like a Chromecast Pod that allows users to play movies and YouTube videos, as well as an immersive surround-screen installation called Portal, designed to let users seemingly fly through any part of the planet using Google Earth.

"It's more an amusement park than a shop, which is what, I think, Google intends," said Gottheil. "Google is doing a very good job with its brand, but it can always be better. You can't be too rich, too thin or have good enough marketing."

Dan Olds, an analyst with The Gabriel Consulting Group, noted that as popular as Google's products, like Android, and services, like Google Maps and Google Earth, are, there's always room for improvement.
"I think that Google sees the need to make their products even more accessible and sees the store as one method to explore," he added. "However, they have to realize that these are going to be loss leaders. It will be difficult, if not impossible, to measure the actual value of the stores to Google's bottom line... If I were them, I'd look at store traffic as the major metric. If they're getting people into the store, then it's a win."

See More : IT NEWS

3/12/2015 02:20:00 PM

Apple wants suit from battery maker A123 dismissed or moved to California

 Apple told the court that the witnesses and records in the case are in California


A bid by Apple to settle an employee poaching lawsuit with battery maker A123 Systems appears to have been unsuccessful, with the iPhone maker now asking the court to dismiss the suit or transfer it to its home turf in California.

In a suit filed in a federal court in Massachusetts, A123 charged Apple with poaching five of its employees to set up a new battery division, which was seen as giving credence to reports that Apple is planning to get into the electric car business.

As the case arises out of Apple's employment of several former A123 Systems employees to work in California, all of the relevant witnesses and records are in that state, and the alleged wrongful acts also occurred there, Apple said in a filing Wednesday. Apple has asked that the case be transferred to the U.S. District Court for the Northern District of California.

A123 has its principal place of business in Michigan, not Massachusetts, according to the filing.

Apple could not be immediately reached for comment on the status of the settlement talks. The company had filed a motion last week that asked the court for time until Wednesday to file its responses to A123's motions as it and the five engineers charged were exploring "a potential resolution of this matter." The new filing did not give an update on those talks.

On Tuesday, Apple asked the U.S. District Court for the District of Massachusetts to dismiss the A123 suit, claiming that the charges are based on an "incorrect and unsupportable theory." In its motion for transfer of the case to California, the company said that the court should transfer the case only if its motion to dismiss is denied in whole or in part.

The complaint admits that Apple and A123 do not compete, as Apple is a consumer electronics company that develops and purchases batteries for use in its products, while A123 manufactures and sells batteries to commercial and industrial customers, according to the Tuesday filing.

"The simple fact Mr. Ijaz allegedly contacted a third party who collaborated with A123, without more, is far from enough to support the broad conclusion that a consumer electronics company is entering into the market to sell commercial or industrial batteries," said Apple while questioning A123's claim that the employees had breached the non-compete provisions of their contracts.

The former employee Mujeeb Ijaz was charged in A123's complaint with trying to solicit a partner of A123, called SiNode Systems, on behalf of Apple.

Apple's possible entry into electric cars came up again during its shareholder meeting on Tuesday. "Quite frankly, I'd like to see you guys buy Tesla," an investor told Apple CEO Tim Cook, according to reports. Cook said that Apple had no relationship with the electric car company, though he would like it to use Apple's CarPlay dashboard interface technology for the iPhone.

See More :- IT News

Wednesday, March 11, 2015

3/11/2015 08:26:00 PM

Facebook nets billions in savings from Open Compute Project

Facebook reaps benefits and dollars from optimizations to its data center, software, and network


Facebook, by adhering to the Open Compute Project it founded in October 2011, has saved more than $2 billion over the past three years, a company official said Tuesday at the Open Compute Summit conference in San Jose, California.

The Open Compute Project (OCP) began as an effort to reduce Facebook's hardware costs, and since then, Vice President of Engineering Jay Parikh said, the company has tracked more than $2 billion in savings via optimizations to its data center, software, and network. "The bottom line for us is actually pretty big," with the company working on efficiency as a first principle, Parikh said.

In the past year, designs compliant with OCP have produced enough energy savings to power 80,000 homes for a year, according to Facebook. Carbon emission reductions are about 400,000 metric tons, equivalent to taking 95,000 cars off the road for a year, as a result of OCP-related measures.

"This stuff really does matter when you think about this optimization." Leveraging OCP, Facebook gets flexibility and saves money and energy in building out infrastructure, according to Parikh.

The project is intended to produce more efficient server, storage, and data center hardware designs, in a model mimicking the open source software movement. Facebook has contributed ideas and designs to OCP ranging from mechanical and electrical systems in the data center to server designs, Parikh said. The company made several announcements Tuesday related to the project, including the Yosemite SoC compute server it has been working on with Intel, intended to dramatically increase speed while lowering the cost of serving Facebook traffic.

The company also proposed a specification for its top-of-rack network switch, Wedge. Facebook is working with the likes of Accton and Broadcom on a Wedge product for the Open Compute Project community, with Accton to ship Wedge in the first half of this year.

Major backers -- including Intel, Hewlett-Packard, Microsoft, and Canonical -- are appearing at this week's conference.

More Info :- InfoWorld
3/11/2015 07:58:00 PM

What to do when IT is broken

An IT professional tackles deep system problems at one company -- then realizes the worst issues are skin deep


As IT pros, we know how to fix workflows and systems. Unfortunately, our troubleshooting skills don't always apply to senior execs stuck in their toxic ways.

I learned this lesson during my time at a company with a negative corporate culture. Early on, while researching the position, I'd heard from a friend who worked there that it was a good organization. That might've been true for the person who referred me, but I quickly realized it was not the case for those of us in the technology department.

For example, upon arrival on my first day, I was supposed to participate in new employee orientation, but nobody was available for the task. Instead, they were all fighting fires, so to speak -- one of the critical systems had failed. As the day went on, I discovered the company had a lot more problems with its technology infrastructure. I almost wanted to run out and pretend I hadn't accepted the offer.

I decided to see what the next day would bring, but it was much of the same: critical systems issues, with no time to do anything other than tend to emergencies. Organization and planning were nearly nonexistent.
Cautiously optimistic

After a while, I was about ready to walk -- and I would have, had I not been promoted by the company. I'm not sure why I signed on. Perhaps I believed I could make changes for the better.

In any case, after the promotion I was in charge of overseeing all the issues that popped up each day. Senior management -- those at the very top who have no idea what is really happening -- wanted all the problems to be fixed in three months, which was a completely impossible expectation.

Over the preceding five years, the company had neglected its infrastructure and kept building on top of what it had. It ended up with a lot of systems that weren't updated or maintained, which then cascaded into a myriad of problems. One coworker explained it best when he said that working there was like working on a car that was currently being driven.

Due to management's unreasonable expectations, few tech folks (particularly senior members) stuck around. I noticed from employment records that a majority of the techs notched only a year of service. Some of the larger projects might end up with many different leads, and since people didn't stay in place, nobody took responsibility for the long-term goals ("it's not my problem anymore"). As a result, the infrastructure was a huge mess.

I realized that to really fix the problems, I had to rally my team and make repairs. With lots of overtime and practically ignoring anything the senior execs threw our way, we eventually stabilized the infrastructure after nearly two years of work.
They take and they take

I was proud of the work we'd done, but despite this accomplishment, senior management still hadn't changed. They weren't interested in our work with the technology and kept demanding the impossible.

Even worse was how they treated the tech department -- as if we were property. They didn't give much credit to our team, and although we told them the reason for the long-term challenges, they still tried to squeeze our group, thus perpetuating the problem. They didn't pay much in the first place, and matched with the poor treatment, the employee turnover continued its churn.

My two most senior employees quit, and I followed. Some problems simply can't be fixed with tech power alone.

More Info :- InfoWorld
3/11/2015 07:50:00 PM

The cloud is full of zombies, but that's OK

Zombie VMs comprise half the public cloud that enterprise IT needs to embrace


Microsoft wants you to believe that Amazon Web Services is "a bridge to nowhere," but nothing could be further from the truth. In fact, as Gartner says, "New stuff [workloads] tends to go to the public cloud ... and new stuff is simply growing faster" than the traditional workloads that currently feed the data center.

Most of that "new stuff" is heading for AWS, though Microsoft Azure is an increasingly credible play.

In fact, both mirror the fact that the future belongs to the public cloud. This is partly a matter of price, as Actuate's Bernard Golden posits, but it's largely a matter of flexibility and convenience. While that convenience may cause a lot of waste in the form of unused VMs, it's a necessary evil on the road to building the future.
Public cloud: Big and getting bigger

The number at which analysts now peg the value of Amazon Web Services has reached a whopping $50 billion. That's an amazing figure, and it's supported by an estimate that AWS will generate $20 billion in annual revenue by 2020, up from roughly $5 billion in 2014.

We've had doubters hate on such prognostications before, and they have been wrong -- every single time.

Clearly, there is an industrywide, tectonic shift toward the scale and convenience of public cloud computing, as Gartner analyst Thomas Bittman's analysis shows.
[Chart: VMs running in the public cloud. Source: Gartner]

According to Gartner, the number of VMs running in the public cloud tripled from 2011 to 2014.

What's clear from these charts is that, overall, the number of active VMs has tripled, as has the number of private cloud VMs -- great.

But far more impressive is the groundswell for VMs running in the public cloud. As Bittman highlights, "The number of active VMs in the public cloud has increased by a factor of 20. Public cloud IaaS now accounts for about 20 percent of all VMs – and there are now roughly six times more active VMs in the public cloud than in on-premises private clouds."

In other words, the private cloud is growing at a reasonable clip, but the public cloud is growing at a torrid pace.
A false number?

Of course, a major chunk of that public cloud growth is vapor. As Bittman notes, "Lifecycle management and governance for VMs in the public cloud aren't nearly as rigorous as management and governance in on-premises private clouds," leading to 30 to 50 percent of public cloud VMs being "zombies," or VMs that are provisioned but not used.

That number may be generous. In my own conversations with a range of enterprises large and small, I've seen VM waste as high as 80 percent.

Not that this is much of a surprise to data center professionals. According to McKinsey estimates, data center utilization stands at a sorry 6 percent. While Gartner provides hope -- estimating utilization at 12 percent -- this still speaks of terrible inefficiencies in hardware use.

In other words, there is always a fair amount of waste in IT, whether it's running in public or private clouds or in traditional data centers. Yes, there are tools like Cloudyn to help track actual cloud usage. Even AWS, which in theory stands to lose revenue if customers shut down 30 to 50 percent of unused capacity, has its CloudWatch monitoring service to help its customers avoid waste. But that isn't really the point.
Inventing the future

The reality is that the public cloud has exploded in popularity because it's helping enterprises transform their businesses. The very convenience that makes it so easy for developers to spin up new server instances leads to the possibility of forgetting they are running when the next project comes along.

This is a strength, not a weakness, of the public cloud. As Matt Wood, AWS head of data science, told me in an interview recently:

    Those who go out and buy expensive infrastructure find that the problem scope and domain shift really quickly. By the time they get around to answering the original question, the business has moved on. You need an environment that is flexible and allows you to quickly respond to changing big data requirements. Your resource mix is continually evolving; if you buy infrastructure, it's almost immediately irrelevant to your business because it's frozen in time. It's solving a problem you may no longer have or care about.

Sure, it would be cheaper to shut down unused VMs. But in the rush to build the future, it can be costly to make the effort. Back to Bittman, who characterizes public vs. private cloud workloads as follows:

    Public cloud VMs are much more likely to be used for horizontally scalable, cloud-friendly, short-lived instances, while private cloud tends to have much more vertically scalable, traditional, long-lived instances. There are certainly examples of new cloud-friendly instances in private clouds, and examples of traditional workloads migrated to public cloud IaaS, but those aren't the norm. New stuff tends to go to the public cloud, while doing old stuff in new ways tends to go to private clouds.

Pay attention to that last line, because it is the clearest indication of why every company needs to invest heavily in the public cloud, and why private cloud feels to me like a short-term stopgap. Yes, there may be workloads that today feel inappropriate for the public cloud. But they won't last.

More Info :- InfoWorld
3/11/2015 07:43:00 PM

VDI on Hyper-V gets easier

Two major VDI bottlenecks have held back adoption, but third parties are helping to clear the way


Microsoft's Hyper-V, per IDC's estimates, has grown to capture nearly one-third of the hypervisor market. That level of adoption is no surprise to admins who have watched as each iteration of Hyper-V has reduced the feature gap with vSphere, VMware's competitor.

However, there's still room for improvement. That improvement may not come from within Microsoft but from third-party ecosystem products that bolt on to your existing Hyper-V framework.

Case in point: Virtual desktop infrastructure (VDI) can help grow the virtualization market, as well as Hyper-V's market share. VDI has been a hot topic for quite some time, but there are deployment challenges around complexity and application delivery. Recently Mark Lockwood, a research director at Gartner, wrote that the biggest bottleneck for VDI was shared storage. But he noted the bottleneck is being eliminated through new options like all-flash arrays, deduplication, and hyperconverged systems.

Solving that issue is good, but there are new bottlenecks to deal with. One is what Lockwood calls "mass events, such as antivirus scans and updates, inventory scans, and software distribution." These now contend with network, CPU, and memory resources. When you have 200 virtual desktops on the same server all getting a 100MB hotfix, patch, or update, the performance hit gives VDI a brand-new black eye even though the storage bottleneck has been addressed.

The key to resolving these "mass event" problems, Lockwood says, is to focus on different ways to deliver applications (and updates and patches).

Once both major bottlenecks are cleared, VDI can become as much a norm for organizations as virtualization itself is today for server rooms and data centers.

One vendor addressing the "mass events" issue around application delivery is Unidesk, which offers application layering and Windows OS layering (which helps address the patching issue).

Microsoft is actively promoting such third-party products. That shows the right spirit within Microsoft and an understanding that it needs an ecosystem to support and add features to its base offering. After all, it's impossible to build it all and focus on every pain point. More partners are needed to make VDI as simple as it should be. Keep 'em coming!

More Info :- InfoWorld
3/11/2015 10:58:00 AM

JavaScript goes native for iOS, Android, and Windows Phone apps

NativeScript development tool uses JavaScript and Typescript to build native apps for iOS, Android, and Windows Phone.

NativeScript, a Telerik technology for building multiplatform native mobile apps from a single code base, is set to go to a 1.0 release in late April. Telerik is launching a beta program this week for the open source NativeScript.

The NativeScript website and GitHub page describe the runtime as enabling developers to use JavaScript and TypeScript to build native apps for iOS, Android, and Windows Phone (via the Windows Universal strategy) and share code across the platforms. "Developers who have that Web skill set who want to build truly native applications should be really excited because there is now a way for them to do that" without having to learn custom languages or frameworks, said Telerik Vice President Todd Anglin.
NativeScript produces apps that have a native UI, Anglin said. "That is, the app is not HTML-rendered in a Web view -- as you get with hybrid apps or traditional browser apps. ... [NativeScript enables] the underlying JavaScript engines on iOS, Android, and Windows to control a native UI layer."

Developers use NativeScript libraries, which abstract away the differences between the native platforms; they also use CSS and ECMAScript 5. The Node.js server-side JavaScript platform "sort of [helps] play that JavaScript engine role that powers the mobile app," Anglin said.

NativeScript provides full access to the native platform API, and it features a prepackaged JavaScript virtual machine; JavaScript written for a NativeScript app still runs as JavaScript on a device. "NativeScript will execute this JavaScript on the native JavaScript virtual machines provided by iOS (JavaScriptCore), Android (V8), and Windows (Chakra)," Anglin said. NativeScript provides "a JavaScript proxy that exposes all of the underlying native iOS/Android/Windows APIs to these JavaScript engines, thereby giving full control to JavaScript to control native device capabilities."

NativeScript also handles the cross-platform native UI, providing a markup language that gets parsed into platform-specific UI widgets when an application is built. "For example, when a developer adds a button to an app, NativeScript will automatically use the appropriate native button UI control from iOS, Android, and Windows."
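As a toy illustration of this element-to-native-widget mapping (the Python names below are invented for the sketch; NativeScript's real resolution happens inside its runtime, not like this), the idea is simply a lookup from an abstract element to the platform's native control:

```python
# Toy sketch: resolve a cross-platform "Button" element to the native
# control name for each platform. The native class names are real platform
# controls; the table and function are hypothetical.
NATIVE_WIDGETS = {
    "Button": {"ios": "UIButton",
               "android": "android.widget.Button",
               "windows": "Windows.UI.Xaml.Controls.Button"},
}

def resolve_widget(element, platform):
    """Map an abstract UI element to the native control for one platform."""
    return NATIVE_WIDGETS[element][platform]

resolve_widget("Button", "ios")      # -> "UIButton"
resolve_widget("Button", "android")  # -> "android.widget.Button"
```

The single abstract element is what lets app code stay shared while each platform still renders its own native control.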

Anglin sees NativeScript as being different from other mobile development technologies, such as Appcelerator Titanium, which also purports to enable the building of native mobile apps via JavaScript. Titanium has too much customization, making it proprietary, Anglin contends.

"The big distinction between a NativeScript app and a hybrid app [such as PhoneGap or Sencha] is that NativeScript does not rely on the browser/Web UI layer to render the app. It renders a native UI, independent of the browser. ... That browser/Web UI layer that is usually the performance bottleneck in mobile apps that want to have buttery smooth animations and scrolling.

Read More Info :- InfoWorld
3/11/2015 10:55:00 AM

Let's get started with PowerShell: The basics

Learn how to perform simple tasks and make your way around the syntax.

Are you a Windows administrator? Did you make a new year's resolution to learn PowerShell in 2015? If so, you have come to the right place.

In this piece, I will get you started by orienting you to the world of PowerShell, helping you get your bearings and showing you how to perform simple tasks with the language so that you have a solid foundation on which to add skills for your particular job. Let's get started.

Get-Basics

PowerShell uses a consistent syntax for all of its commands -- in fact, PowerShell commands are actually called cmdlets, because they’re much more than simple DOS-style actions. All cmdlets use the following syntax:
Verb-Noun
You can easily remember it as "do something to" "this thing." For example, here are three actual cmdlets:
  • Set-ExecutionPolicy
  • Get-Service
  • Show-Command
All cmdlets will always follow this format. Using these three, you can set an execution policy, get some information about a service and what it can do, and show a command or list of commands.
There are a few things to remember about using PowerShell at any time:
  • PowerShell is case-insensitive. UPPERCASE, lowercase, cAmElCaSe -- it doesn't matter. PowerShell simply reads the text in and performs the action you want.
  • Since PowerShell cmdlets are always consistently formatted, you can chain those cmdlets and their output together and do things in sequence. For example, one cmdlet can retrieve a list of things, and you can send that list (the output from that first command) to a second command, which then does things too. This can go on and on and on as long as you need it to until whatever task you want is complete.
  • The output of a PowerShell cmdlet is always a .NET object. This might not mean a lot to you right now, especially if you are not a programmer or don't have a software development background, but you will find as you learn more about the PowerShell language that this is where some of the real power in PowerShell lies.                                                                                                       
More Info :- InfoWorld
3/11/2015 10:52:00 AM

Making Wireshark work for high-speed networks

To keep up with today’s big and complex networks, traditional packet capture tools need a little help.

Capturing packets, or sniffing them from networks using relatively lightweight probes and monitoring tools, has long been one of the most common ways to uncover issues on the network. A lot of these tools are still free and widely supported, such as Wireshark, but they might not get to the root cause of issues today as effectively as they did in the past.

The reason is the sheer volume of data produced by today’s complex physical and virtual network architectures. High-speed packet capture from 10G or 40G connections (with 100G lines now looming on the horizon), being able to store the packets effectively, and sifting through all of that data with any kind of fidelity pose enormous challenges. You can still find out anything that happened on the network if you have the packets. But the haystacks have never been bigger, and the needles have never been better at hiding.

Traditional packet capture tools cannot keep up unless you know exactly where to look. In today’s high-speed networks, relying only on traditional packet capture would be like using a scalpel to cut down a tree. What you really need is a chain saw to get the tree down first; then, if you’re looking for more precision, you’d use the scalpel. These days, even seconds of packet capture can generate millions of packets that will be meaningless unless you can get to the packets you need quickly.

The key is using a funnel approach. That is, start by monitoring the bigger picture of user experience and response times through a combination of flow data and passive monitoring on taps or SPAN (Switched Port Analyzer) ports. Find out if you have specific users, links, or applications that are consuming more bandwidth and crowding out others. For the delays, you can decode the TCP/IP conversations to provide the composition of delay. Metrics such as server delay, retransmission delay, connection setup delay, and payload transfer delay will help you decide where to look. As you work down the funnel, you’ll learn where you need to capture and analyze at the packet level.

To take a real-world example, think of the microbursts that arise from high-frequency transactions, such as in the wake of the release of market data to financial trading institutions. In this case, you might find that trades are not being executed as expected, yet bandwidth utilization from flows and TCP/IP conversations looks perfectly normal. This is where packet capture comes in. If you drill down into your packet capture engine and view the packets that show the microbursts at millisecond levels, you’ll be able to see the wall of saturation where trading has stopped. Then you'll know you need to upgrade your multiple 40G links to a 100G link.
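The millisecond-level view described above can be made concrete with a small hedged Python sketch (the timestamps and burst threshold are invented for illustration; a real capture engine works on pcap data, not a list):

```python
# Sketch: bucket packet timestamps into 1-millisecond bins to expose
# microbursts that per-second bandwidth averages smooth over.
from collections import Counter

def find_microbursts(timestamps_s, threshold, bin_ms=1):
    """Return the millisecond bins holding at least `threshold` packets."""
    bins = Counter(int(t * 1000 / bin_ms) for t in timestamps_s)
    return [b for b, count in bins.items() if count >= threshold]

# Five packets land in the same millisecond; one stray packet arrives later.
packets = [0.0101, 0.0102, 0.0104, 0.0105, 0.0108, 0.0300]
bursts = find_microbursts(packets, threshold=5)  # -> [10]
```

Averaged over a full second, this traffic looks negligible; only the millisecond binning reveals the saturation spike.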

A smart combination of monitoring and analysis tools will provide you with the context needed to determine whether or not a packet capture is warranted, then will help you get to the exact slice of data you need in that context for analysis.

In some cases, an “on demand” approach is sufficient, such as in a branch office location or for mobile users where capturing all the packets all the time for every user might not be worth it. With an intelligent packet capture approach, you can configure alerts on slow application responses that trigger a packet capture to identify the root cause of the issue. You could have mobile users in delivery trucks or even law enforcement personnel who suddenly experience slow Web apps because someone at headquarters updated the news feed with a picture of a newborn baby or the employee of the month. The size of the JPEG image could instantly clobber the performance of apps used in the branch or on the road, but you might not catch it without doing a packet capture.
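The trigger logic behind that "on demand" approach is simple to sketch. In this hypothetical Python fragment, `start_capture` is a placeholder hook; a real deployment would invoke a capture appliance's API or a tool such as tcpdump instead.

```python
# Sketch of alert-triggered packet capture: watch application response
# times and start a capture only when they cross a slowness threshold.
# start_capture() is a hypothetical stand-in for a real capture hook.

SLOW_THRESHOLD_SEC = 2.0

captures_started = []

def start_capture(app, duration_sec=60):
    # Placeholder: in practice, kick off tcpdump or an appliance capture.
    captures_started.append((app, duration_sec))

def on_response_measured(app, response_sec):
    """Called by the monitoring layer for each measured response."""
    if response_sec > SLOW_THRESHOLD_SEC:
        start_capture(app)

on_response_measured("web-portal", 0.4)   # normal response, no capture
on_response_measured("web-portal", 3.7)   # slow response triggers capture
print(captures_started)
```

The point is that packets are only collected around the moments that matter, so branch offices and mobile users don't pay the cost of full-time capture.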

In other cases -- such as for the mission-critical apps that run your business, where slow time is like downtime because you’re losing thousands of dollars by the second -- having a complete history of packet captures available for inspection might make more sense. The importance of keeping packet data was highlighted by the recent Heartbleed security issue, where the key to knowing if your data was exposed to hackers was having the history of packets to inspect at a later date.

You cannot diagnose and fix what you don’t know. Ultimately, the truth about your network resides in the packets. But in today’s high-speed networks, making effective use of packet analysis requires both chain saws and scalpels. By combining higher-level monitoring and analysis solutions with intelligent packet capture, you can continue to use packet analysis to unearth the causes of poor network performance. After all, slow is the new down.

More info: InfoWorld
3/11/2015 10:50:00 AM

Real wireless charging might finally be here

New approach from Wi-Charge based on infrared beams truly delivers power at a distance.

We've been promised wireless charging for years, but it's not really wireless. You still need a physical connection, resting your device in its compatible skin or case on a charging mat -- marginally more convenient, perhaps, than a normal plug.

So I was happy to see a demonstration of real wireless power unveiled yesterday by a company called Wi-Charge. The technology has an H.G. Wells retro vibe, but it works -- and it might deliver on the promise of wireless charging.

Wi-Charge's approach combines two technologies, infrared lasers and retroreflector mirrors, to beam power to devices from as far as 30 feet away. No, it won't fry whatever gets in the way of the beam. Yes, you can put your device almost anywhere and still receive the charging beam.

The new power source for mobile devices and the Internet of things?

Wi-Charge, which is now seeking hardware manufacturers to license its technology and get it into products in 2016 or 2017, hopes that mobile devices, wireless stereos, and Internet of things/home-automation devices from smoke detectors to cameras will adopt the technology.

I'd love it if my smoke detectors used this technology, so I'd no longer get that in-the-middle-of-the-night chirping when the battery is low; with Wi-Charge, the (rechargeable) batteries would never get low. The idea of smartphones automatically recharging when in range of a beam is also very appealing. (Thus the company name, a play on "why charge?" in the sense of no longer plugging into a device to charge it.)

As we add more electronic gadgets to our homes, often in places without power outlets, the appeal of this technology is strong. In Wi-Charge's approach, people would use wall-mounted chargers to power devices on walls or ceilings, and ceiling-mounted chargers to power items on desks and tables.

Recognizing that people will not tear out drywall and hire an electrician to run new electrical wiring to such chargers, the company is working on chargers that are also LED light bulbs, so you can install the ceiling versions in ceiling cans and other such lighting fixtures as a bulb replacement. The wall-mounted chargers would likely plug into existing wall outlets.
This artist's rendering shows how a Wi-Charge infrared power transmitter could be shaped like a lightbulb to work in existing fixtures; it would also include an LED lamp for lighting.

How Wi-Charge's power beam technology works

A normal laser works by bouncing light between two mirrors, which both focuses the light particles into beams and concentrates their power. The beam is then allowed through a small opening to aim at its target. Wi-Charge puts one mirror in the charger and one on the device itself. The beam automatically forms as infrared light emitted from the charger's "lightbulb" bounces between them. As soon as an object gets in the way, the bouncing of the light particles stops, and the beam shuts down instantly.

Because the beam is infrared, not visible light, there's no danger of blinding people or animals who look into it, nor of melting plastic or flesh that gets in the way, nor do you see distracting light beams in the room. Yet the beam can deliver up to 10W of power -- enough to power several smartphones or other small devices.
I put my hand into such a beam and didn't even feel a slight temperature rise. Yet doing so immediately cut off the power to the wireless radio the beam was powering. When I removed my hand, the power delivery resumed in less than a second.

You might wonder how the beam tracks the device as it moves -- after all, a normal laser has two fixed mirrors, and they must be precisely aligned for the light to bounce between them.

The answer is what Wi-Charge calls a distributed resonator. It uses a pair of retroreflectors, a type of mirror that always bounces light back to where it came from; with one on each side, the pair is essentially self-aligning. The Wi-Charge charger and receivers each carry one. (Wi-Charge's website offers a more detailed explanation.)

Not only does the use of retroreflectors mean the beam follows the receiver as it moves, but you can also tilt the receiver by about 40 degrees and still keep a line-of-sight connection to the charger. That's important: a line of sight is required for this technology to work. Anything that gets in the way of the beam shuts it down by interrupting the laser-forming bounce.

A Wi-Charge charger can establish an infrared charging beam between itself and multiple devices, as long as they have an unobstructed view of each other. The company envisions rooms having ceiling- and/or wall-mounted units to cover most devices' locations. With the technology's self-aligning beams, devices can be charged even if you move them. (Image: Wi-Charge)

Wi-Charge showed me a few receivers with a retroreflector of about 10mm (0.4 inch) square -- that was fine on a wireless radio but a bit awkward in a smartphone case. The company says such retroreflectors can be as small as 5mm (0.2 inch) square and be placed under the glass of a smartphone. If so, this technology could plausibly be embedded in a range of devices.

One caveat: It's unclear how much electricity the Wi-Charge transmitters will draw to beam sufficient power to devices, but the fact that the company is trying to fit them into standard lightbulb form factors is a good sign. Still, the chargers must always be on, ready for any device receiver that comes in range, so your total power usage might go up somewhat. By contrast, modern wired wall chargers and induction-charging "wireless" mats now typically have auto-off circuits that engage when nothing is plugged in.

Still, I'm intrigued. I hope a few manufacturers give the technology a real workout in their labs to see if it can be commercialized and live up to its promise.

More info: InfoWorld
3/11/2015 10:45:00 AM

Developers turning to less-troublesome JavaScript variants

CoffeeScript's climb in language index means developers want more choices beyond JavaScript.

CoffeeScript, a language that compiles to JavaScript, is creeping up in a prominent language popularity index -- a sign that developers want alternatives to JavaScript.

This month’s Tiobe Index of language popularity has CoffeeScript entering its top 100 languages for the first time, ranked 64th, albeit with a rating of less than 1 percent, like most of the languages featured in the index. Tiobe’s index gauges language popularity based on a formula assessing searches on specific languages in a variety of search engines.

“The surge for CoffeeScript was expected already some time ago,” said Paul Jansen, managing director of Tiobe, in an email. “Now that everybody is forced to use JavaScript and it is very easy to shoot yourself in the foot with JavaScript, [the] industry is looking for alternatives. These are Dart, CoffeeScript, TypeScript, and many others.”

JavaScript, ranked seventh in this month’s index, is a staple of Web development. Dart is ranked close behind CoffeeScript at 66th place. TypeScript does not register on this month’s index. But that could change soon, given a recent partnership between Google and Microsoft that heavily leverages TypeScript, which is Microsoft’s answer to JavaScript.

This month’s index also has F#, Microsoft’s functional programming language, reaching an all-time-high ranking of 11th place, with a share of 0.29 percent. It was ranked 12th in last month’s index and was seen as on its way to the top 10 a year ago. That still has not happened. But interest nonetheless is growing in functional languages such as F# and Scala. “F# has the luck that it is part of Microsoft's Visual Studio ecosystem, so it is easier to accept as a solution by industry,” Jansen said.

Elsewhere in the index, C is tops again, with a 16.64 percent rating, followed by Java (15.58 percent), Objective-C (6.69 percent), C++ (6.64 percent), and C# (4.92 percent). The alternative PyPL Popularity of Language Index, which assesses searches on language tutorials in Google, has Java in its top spot with a 24.3 percent share, followed by PHP (11.4 percent), Python (10.7 percent), C# (8.8 percent), and C++ (8 percent).

The two indexes part ways when it comes to Apple’s Swift language, introduced last June. PyPL has it in 11th place, with a 2.7 percent share, while Tiobe ranks it 23rd, with a rating of 0.82 percent. It was ranked 16th in the Tiobe index last July. Still, Jansen sees good things for it. “Usually if one of the big software companies announce a new programming language it will hit the top 20 the first few months [due to hype]. After that, it will drop and then the most important phase starts: adoption,” he said. “This is a very gradual process. The fact that Swift is at position 23 is a good sign. This means that adoption is [taking] place and we can expect Swift to be back in the top 20 soon.”

More info: InfoWorld
3/11/2015 10:42:00 AM

BitTorrent delivers cloud-free 'Dropbox for business'

New BitTorrent Sync 2.0 aims at business users with user-management features.

BitTorrent originated as a file sharing and distributed download technology, powering downloads of content both legitimate (such as Linux ISOs) and not (Taylor Swift albums).

With BitTorrent Sync, the technology's creators have turned to a new use case: a decentralized substitute for file sync-and-share services like Dropbox. The new BitTorrent Sync 2.0 ups the ante by providing a "pro" tier, with what BitTorrent describes as "additional functionality for business workgroups and individuals that need more capabilities and controls from Sync."
BitTorrent Sync itself, even in its nonpro incarnation, is a handy little tool. Install it on two or more devices -- Windows, Mac OS X, Linux, iOS, Android, or Windows Phone -- and you can elect to synchronize up to 10 folders among those machines. The synchronization process -- the actual shuttling of data -- is done entirely peer-to-peer. Any folder can be synced, even those on a removable drive. Versions of Sync are also available for many popular NAS devices from Seagate, Western Digital, Netgear, and others, allowing content on those devices to be synced.
BitTorrent Sync lets you protect folders and files with user-level permissions and set shared links to expire after a specified number of days. The QR Code option is for syncing to a mobile device.
Setting up a folder to sync involves passing an alphanumeric “secret key” between the peer machines. Aside from copying the key as text or emailing it, you can also scan a QR code if you’re setting up sync on the mobile-app version of the program.

Once that's set up, the whole sync process is more or less automatic and silent, in much the same way as Dropbox itself. The main limitation is the number of separate folders that can be synced, but if you follow the Dropbox model and have everything to sync in one folder anyway, this limit isn't as onerous. The speed of syncing is entirely dependent on the speed of the network between the machines in question. Whenever possible, BitTorrent syncs only block-level changes in files to speed things up.
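Block-level syncing is the rsync-style idea of hashing fixed-size blocks of a file and transferring only the blocks whose hashes differ. This Python sketch illustrates the general technique; it is not BitTorrent Sync's actual wire protocol, and the tiny block size is chosen only for demonstration.

```python
# Generic block-level change detection: hash fixed-size blocks of the
# old and new file contents, then transfer only the blocks that differ.
import hashlib

BLOCK_SIZE = 4  # tiny for demonstration; real tools use KB-sized blocks

def block_hashes(data):
    """SHA-1 hash of each fixed-size block of the given bytes."""
    return [hashlib.sha1(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(old, new):
    """Return indices of blocks in `new` that must be transferred."""
    old_h, new_h = block_hashes(old), block_hashes(new)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or old_h[i] != h]

old = b"aaaabbbbccccdddd"
new = b"aaaaXXXXccccddddeeee"  # block 1 modified, block 4 appended
print(changed_blocks(old, new))  # [1, 4]
```

Only two of the five blocks cross the wire, which is why small edits to large files sync far faster than a whole-file copy would.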

Sync Pro, which costs $39.99 per user per year (there’s a free 30-day trial), removes the 10-folder limit and adds more granularity to folder syncing. A folder can have selected files synced (again, à la Dropbox) so that devices with limited storage won't end up suffocating under the load. Pro also provides per-user controls, so users can be given folder permissions -- read only, read and write -- and can be allowed to delegate folder access to other users.

BitTorrent Sync's decentralized structure works both for and against it in an enterprise setting. The main drawback (or advantage, depending on how you look at it) is that, like its file storage, its users and permissions system is decentralized. This means users and permissions can't be managed by way of, for instance, Active Directory; all access has to be set at each peer by hand. For small ad hoc teams of a few people within an enterprise, this isn't bad, but for larger groups, access control will be far tougher to manage.
If BitTorrent Sync can find a way to allow its peer-to-peer structure to work elegantly with existing enterprise infrastructure, it’ll be a major plus. As it stands, it’s best suited for teams of a few people that don’t mind doing a little heavy lifting.

More info: InfoWorld