Breaking

Friday, January 30, 2015

1/30/2015 04:20:00 PM

Microsoft kicks off C# 7 language planning

The design team working on C# is examining data management, performance, and reliability.

Designers are off and running with plans for the next generation of Microsoft’s C# language, with key themes centering on data management, performance, and reliability.

According to meeting notes for the C# team, posted earlier this month on GitHub, the team is looking beyond the planned version 6.0 of the type-safe, object-oriented language. As posted by Microsoft’s Mads Torgersen, a design team member, the notes say: “This is the first design meeting for the version of C# coming after C# 6. We shall colloquially refer to it as C# 7.” Likely themes to investigate for C# 7 include working with data; performance, reliability, and interop; and componentization, distribution, and meta programming.

In accordance with the theme of working with data, possible C# features include pattern matching, “denotable” anonymous types, working with common data structures, slicing, and immutability. The notes state that today’s programs are connected and trade in rich, structured data -- what's on the wire, what applications and services produce, manipulate, and consume. Traditional object-oriented languages, while good for many tasks, deal poorly with this setup.

 
 
C# builders must look at functional languages to deal with this, the notes stress. “Functional programming languages are often better set up for this: data is immutable (representing information, not state), and is manipulated from the outside, using a freely growable and context-dependent set of functions, rather than a fixed set of built-in virtual methods,” the notes say. C# followers need to keep being inspired by languages including F#, Scala, and Swift.

For performance, reliability, and interop, the team states that C# has had a history of being “fast and loose” in performance and reliability. “Internally at Microsoft there have been research projects to investigate options here. Some of the outcomes are now ripe to feed into the design of C# itself, while others can affect the .Net Framework, result in useful Roslyn analyzers, etc,” Torgersen’s notes say. “Over the coming months we will take several of these problems and ideas and see if we can find great ways of putting them in the hands of C# developers.”

Concentrating on componentization, the notes state the “once set-in-stone issue of how .Net programs are factored and combined is now under rapid evolution.” Most work in this space is more tooling-oriented, covering capabilities including the generation of reference assemblies, static linking, determinism, NuGet support and versioning. “This is a theme that shouldn't be driven primarily from the languages, but we should be open to support at the language level.”

To help with the distributed nature of modern computing, C# designers are pondering async sequences and serialization. “We introduced single-value asynchrony in C# 5 but do not yet have a satisfactory approach to asynchronous sequences or streams.” For serialization, “we may no longer be into directly providing built-in serialization but we need to make sure we make it reasonable to custom-serialize data -- even when it's immutable, and without requiring costly reflection.”

Meta programming, meanwhile, has been “on the radar” for a long time, with the Roslyn compiler project intended to enable programs about writing programs. “However, at the language level we continue not to have a particularly good handle on meta programming.”

The team is also considering null capabilities as a theme. “With null-conditional operators such as x?.y C# 6 starts down a path of more null-tolerant operations,” the notes say. “You could certainly imagine taking that further to allow e.g. awaiting or foreach'ing null, etc. On top of that, there's a long-standing request for non-nullable reference types, where the type system helps you ensure that a value can't be null, and therefore is safe to access.”

Also with C# 7, designers are pondering pattern matching, which provides a way of asking whether a piece of data has a particular shape, then extracting pieces of it. Array slices, meanwhile, would boost efficiency by providing a “window” into an existing array.

“Array slices represent an interesting design dilemma between performance and usability. There is nothing about an array slice that is functionally different from an array: You can get its length and access its elements,” notes state. “For all intents and purposes they are indistinguishable. So the best user experience would certainly be that slices just are arrays -- that they share the same type. That way, all the existing code that operates on arrays can work on slices too, without modification.”
Design notes state that while input and openness is sought in the development of C#, ultimately, decisions are made by the design team. “It's important to note that the C# design team is still in charge of the language. This is not a democratic process. We derive immense value from comments and UserVoice votes, but in the end the governance model for C# is benevolent dictatorship."

Read More News :- Techies | Update
1/30/2015 04:16:00 PM

Build an IoT analytics solution with big data tools

The Internet of things seems futuristic, but real systems are delivering real analytics value today.

With all the hype around the Internet of things, you have a right to be skeptical that the reality could ever match the promise. I would feel that way myself -- if it weren’t for some recent firsthand experience.

I recently had the opportunity to work on a project that involved applying IoT technologies to medical devices and pharmaceuticals in a way that could have a profound impact on health care. Seeing the possibilities afforded by “predictive health care” opened my eyes to the value of IoT more than any other project I’ve been associated with.
Of course, the primary value in an IoT system is in the ability to perform analytics on the acquired data and extract useful insights, though make no mistake -- building a pipeline for performing scalable analytics with the volume and velocity of data associated with IoT systems is no walk in the park. To help you avoid some of the difficulties we encountered, allow me to share a few observations on how to develop an ideal IoT analytics stack.

Acquiring and storing your data

Myriad protocols enable the receipt of events from IoT devices, especially at the lower levels of the stack. For our purposes, it doesn’t matter whether your device connects to the network using Bluetooth, cellular, Wi-Fi, or a hardware connection, only that it can send a message to a broker of some sort using a defined protocol.

One of the most popular and widely supported protocols for IoT applications is MQTT (Message Queue Telemetry Transport). Plenty of alternatives exist as well, including Constrained Application Protocol, XMPP, and others.

Given its ubiquity and wide support, along with the availability of numerous open source client and broker applications, I tend to recommend starting with MQTT, unless you have compelling reasons to choose otherwise. Mosquitto is one of the best-known and widely used open source MQTT brokers, and it's a solid choice for your applications. The fact that it's open source is especially valuable if you're building a proof of concept on a small budget and want to avoid the expense of proprietary systems.

Regardless of which protocol you choose, you will eventually have messages in hand, representing events or observations from your connected devices. Once a message is received by a broker such as Mosquitto, you can hand that message to the analytics system. A best practice is to store the original source data before performing any transformations or munging. This becomes very valuable when debugging issues in the transform step itself -- or if you need to replay a sequence of messages for end-to-end testing or historical analysis.
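
To make that handoff concrete, here is a minimal sketch of an MQTT subscriber that persists every raw payload before anything else touches it. It assumes the paho-mqtt 1.x Python client; the broker host, topic pattern, and store_raw() helper are placeholders for illustration, not part of any real deployment.

# Minimal sketch: subscribe to a Mosquitto broker and persist each raw
# payload before any transformation is applied.
import paho.mqtt.client as mqtt

BROKER_HOST = "mqtt.example.internal"   # hypothetical broker address
RAW_TOPIC = "devices/+/events"          # hypothetical topic pattern

def store_raw(topic, payload):
    # Stand-in for a write to Couchbase, HDFS, or another persistent store.
    print(f"storing raw event from {topic}: {payload[:80]!r}")

def on_message(client, userdata, msg):
    # Persist the untouched payload first, then hand it to the pipeline.
    store_raw(msg.topic, msg.payload)

client = mqtt.Client()   # paho-mqtt 1.x constructor; 2.x also wants a callback_api_version
client.on_message = on_message
client.connect(BROKER_HOST, 1883)
client.subscribe(RAW_TOPIC)
client.loop_forever()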

For storing IoT data, you have several options. In some projects I've used Hadoop and Hive, but lately I’ve been working with NoSQL document databases like Couchbase with great success. Couchbase offers a nice combination of high-throughput, low-latency characteristics. It's also a schema-less document database that supports high data volume along with the flexibility to add new event types easily. Writing data directly to HDFS is a viable option, too, particularly if you intend to use Hadoop and batch-oriented analysis as part of your analytics workflow.

For writing source data to a persistent store, you can either attach custom code directly to the message broker at the IoT protocol level (for example, the Mosquitto broker if using MQTT) or push messages to an intermediate messaging broker such as Apache Kafka -- and use different Kafka consumers for moving messages to different parts of your system. One proven pattern is to push messages to Kafka and two consumer groups on the topic, where one has consumers that write the raw data to your persistence store, while the other moves the data into a real-time stream processing engine like Apache Storm.
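
A rough sketch of that fan-out pattern follows, using the kafka-python package; the topic name, group names, and persist_raw() stub are assumptions for illustration only.

# Sketch of the two-consumer-group pattern with kafka-python.
from kafka import KafkaProducer, KafkaConsumer

def persist_raw(payload):
    print(f"writing raw event to store: {payload!r}")   # stand-in helper

# Ingest side: push each incoming IoT message onto a topic.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("iot-events", b'{"device": "42", "temp_c": 21.5}')
producer.flush()

# Consumer group 1: writes raw events to the persistence store.
raw_writer = KafkaConsumer("iot-events", group_id="raw-writers",
                           bootstrap_servers="localhost:9092")

# Consumer group 2 (run in another process): feeds the same events into a
# stream processor such as Storm.
# stream_feed = KafkaConsumer("iot-events", group_id="stream-processors",
#                             bootstrap_servers="localhost:9092")

for record in raw_writer:      # each group sees every message independently
    persist_raw(record.value)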

If you aren’t using Kafka and are using Storm, you can also simply wire a bolt into your topology that does nothing but write messages out to the persistent store. If you are using MQTT and Mosquitto, a convenient way to tie things together is to have your message delivered directly to an Apache Storm topology via the MQTT spout.

Preprocessing and transformations

Data from devices in their raw form are not necessarily suited for analytics. Data may be missing, requiring an enrichment step, or representations of values may need transformation (often true for date and timestamp fields).

This means you'll frequently need a preprocessing step to manage enrichments and transformations. Again, there are multiple ways to structure this, but another best practice I’ve observed is the need to store the transformed data alongside the raw source data.

Now, you might think: “Why do that when I can always just transform it again if I need to replay something?” As it turns out, transformations and enrichments can be expensive operations and may add significant latency to the overall workflow. It's best to avoid the need to rerun the transformations if you rerun a sequence of events.
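
As a sketch of what such a step might look like, the snippet below normalizes a device timestamp and writes the transformed record alongside (not instead of) the raw one. The field names and the save_document() helper are invented for illustration.

# Sketch of a preprocessing step that keeps both raw and transformed copies.
from datetime import datetime, timezone

def transform(raw_event):
    enriched = dict(raw_event)
    ts = raw_event.get("ts")
    if ts is not None:
        # Normalize epoch seconds into ISO 8601, a common timestamp fix-up.
        enriched["observed_at"] = datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()
    return enriched

def save_document(bucket, doc):
    print(f"[{bucket}] {doc}")   # stand-in for a Couchbase or HDFS write

raw = {"device": "42", "ts": 1422576000, "temp_c": 21.5}
save_document("raw_events", raw)                      # keep the original for replay
save_document("transformed_events", transform(raw))   # avoid re-transforming later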

Transformations can be handled several ways. If you are focused on batch mode analysis and are writing data to HDFS as your primary workflow, then Pig -- possibly using custom user-defined functions -- works well for this purpose. Be aware, however, that while Pig does the job, it’s not exactly designed to have low-latency characteristics. Running multiple Pig jobs in sequence will add a lot of latency to the workflow. A better option, even if you aren’t looking for “real-time analysis” per se, might be using Storm for only the preprocessing phase of the workflow.

Analytics for business insights

Once your data has been transformed into a suitable state and stored for future use, you can start dealing with analytics.

Apache Storm is explicitly designed for handling continuous streams of data in a scalable fashion, which is exactly what IoT systems tend to deliver. Storm excels at managing high-volume streams and performing operations over them, like event correlation, rolling metric calculations, and aggregate statistics. Of course, Storm also leaves the door open for you to implement any algorithm that may be required.

Our experience to date has been that Storm is an extremely good fit for working with streaming IoT data. Let’s look at how it can work as a key element of your analytics pipeline.

In Storm, by default “topologies” run forever, performing any calculation that you can code over a boundless stream of messages. Topologies can consist of any number of processing steps, aka bolts, which distribute over nodes in a cluster; Storm manages the message distribution for you. Bolts can maintain state as needed to perform “sliding window” calculations and other kinds of rolling metrics. A given bolt can also be stateless if it needs to look at only one event at a time (for example, a threshold trigger).
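
To illustrate the kind of state a "sliding window" bolt holds, here is a plain-Python sketch of a rolling average; it shows only the windowing logic, not Storm's actual bolt API.

# Plain-Python sketch of the state behind a sliding-window rolling metric.
from collections import deque

class SlidingWindowAverage:
    def __init__(self, window_size):
        self.window = deque(maxlen=window_size)   # old values fall off automatically

    def update(self, value):
        self.window.append(value)
        return sum(self.window) / len(self.window)

avg = SlidingWindowAverage(window_size=5)
for reading in [20.5, 21.0, 22.4, 21.8, 23.1, 22.0]:
    print(f"reading={reading}  rolling_avg={avg.update(reading):.2f}")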

The calculated metrics in your Storm topology can then be used to suit your business requirements as you see fit. Some values may trigger a real-time notification using email or XMPP or update a real-time dashboard. Other values may be discarded immediately, while some may need to be stored in a persistent store. Depending on your application, you may actually find it makes sense to keep more than you throw away, even for “intermediate” values.

Why? Simply put, you have to “reap” data from any stateful bolts in a Storm topology eventually, unless you have infinite RAM and/or swap space available. You may eventually need to perform a “meta analysis” on those calculated, intermediate values. If you store them, you can achieve this without the need to replay the entire time window from the original source events.
[Figure: IoT analytics pipeline]

How should you store the results of Storm calculations? To start with, understand that you can do anything in your bolts, including writing to a database. Defining a Storm topology that writes calculated metrics to a persistent store is as simple as adding code to the various bolts that connect to your database and pushing the resulting values to the store. Actually, to follow the separation-of-concerns principle, it would be better to add a new bolt to the topology downstream of the bolt that performs the calculations and give it sole responsibility for managing persistence.

Storm topologies are extremely flexible, giving you the ability to have any bolt send its output to any number of subsequent bolts. If you want to store the source data coming into the topology, this is as easy as wiring a persistence bolt directly to the spout (or spouts) in question. Since spouts can send data to multiple bolts, you can both store the source events and forward them to any number of subsequent processing steps.

For storing these results, you can use any database, but as noted above, we've found that Couchbase works well in these applications. The key point to choosing a database: You want to complement Storm -- which has no native query/search facility and can store a limited amount of data in RAM -- with a system that provides strong query and retrieval capabilities. Whatever database you choose, once your calculated metrics are stored, it should be straightforward to use the native query facilities in the database for generating reports. From here, you want the ability to utilize Tableau, BIRT, Pentaho, JasperReports, or similar tools to create any required reports or visualizations.

Storing data in this way also opens up the possibility of performing additional analytics at a later time using the tool of your choice. If one of your bolts pushes data into HDFS, you open up the possibility of employing an entire swath of Hadoop-based tools for subsequent processing and analysis.

Building analytics solutions that can handle the scale of IoT systems isn't easy, but the right technology stack makes the challenge less daunting. Choose wisely and you'll be on your way to developing an analytics system that delivers valuable business insights from data generated by a swarm of IoT devices.

Read More News :-  Techies | Update
1/30/2015 02:49:00 PM

Can Microsoft make R easy?

Microsoft and R are poised to enter a mutually beneficial relationship.

The R programming language is a key tool for data scientists. It is not, however, easy to learn or use. While some suggest that R and data science in general is inherently complex, there's clearly opportunity for it to be democratized, at least to the point that business analysts can take advantage of it -- which is critical, given how important data has become to running an enterprise effectively.

Embedded in Microsoft’s acquisition last week of Revolution Analytics is the possibility that in the future you may not need to be a propeller-head to effectively use R. Just as Microsoft lowered the bar to becoming an effective system administrator and developer, so, too, may its ownership of R help to close the data science skills gap that plagues the industry.

Geeks to inherit R

In some ways, Microsoft had no choice but acquire Revolution Analytics. As much as Microsoft, Oracle, or other tech giants may wish it otherwise, big data is a big deal, and nearly all of the best big data technology is open source. This is why Microsoft has embraced Hadoop, MongoDB, and other leading big data technologies, both for internal use and within the products it licenses to others.

Buying into R, the default programming language for data scientists, makes sense.

While Python has proven popular with an increasing number of would-be data scientists, it’s still the case, as a KDnuggets poll of data science professionals reveals, that R dominates data science (used by 61 percent of responders), compared to Python (39 percent) or SQL (37 percent).

As Gartner research director Alexander Linden finds, “A lot of innovative data scientists really favor open source components (especially Python and R) in their advanced analytics stack.” The statement is true, but it also implies R’s Achilles' heel: It’s hard to use (Bob Muenchen offers a few reasons why), a tool for the “innovative” and “advanced.”

Many have been willing to forgive R for this sin because, as Tal Yarkoni speculates, “even people who hate the way R chokes on large data sets, and its general clunkiness as a language, often [can’t] help running back to R as soon as any kind of serious data manipulation was required.”
It’s hard, but it’s powerful.

Microsoft to the rescue

But what if it could be easy and powerful? Companies like Datameer promise to democratize data science, but arguably no other company has more potential to do this than Microsoft.

Microsoft has a long history of making complicated technology simple to use. Love it or hate it, Microsoft has done more to democratize technology than any other vendor.

Could Microsoft do the same for R? Definitely maybe.

A certain amount of complexity is inherent in R, of course. Red Hat’s Dave Neary argues, “R is for statistics and numerical analysis,” requiring an “understand[ing of] the math to some degree.” He goes on to suggest, “Saying it's too hard for mere mortals is like saying a saw is too hard. [You n]eed to learn the tools.”
The promise behind the deal, however, is that Microsoft can significantly improve those tools, such that a tech-savvy, nonprogrammer can do “data science.”

Or as Henri Yandell humorously responded to my interaction with Neary, it’s like “asking if Microsoft are going to make a power saw for those too lazy to learn how to use a hand saw.” It's not a perfect analogy, of course, but I suspect many will be very happy to be given a power saw for data.

Let’s be clear: That “power saw” is very much what Microsoft seems to have in mind. While few details were offered, Joseph Sirosh, Microsoft’s corporate vice president of Machine Learning, insists that Microsoft plans to improve access to the power of R:

As their volumes of data continually grow, organizations of all kinds around the world – financial, manufacturing, health care, retail, research – need powerful analytical models to make data-driven decisions. This requires high performance computation that is “close” to the data, and scales with the business’ needs over time. At the same time, companies need to reduce the data science and analytics skills gap inside their organizations, so more employees can use and benefit from R. This acquisition is part of our effort to address these customer needs.

The plan, then, is to “empower enterprises, R developers and data scientists to more easily and cost effectively build applications and analytics solutions at scale” -- not only data scientists, not only R developers, but also the more pedestrian enterprise customers that Microsoft has sold into for decades.
Analyst Ben Kepes believes there’s promise in such “applied analytics,” and I agree. He writes, “‘Analytics’ is, for the vast majority of people, merely a concept they have no access to. Everyone has heard of ‘delivering insights,’ but few have the ability to do so. Analytics, when applied to core applications and delivered to end users, changes that.”

Assuming Microsoft can deliver, he concludes, this has the potential to deliver “analytics democratized.”

Cause for concern in open source land?

While Microsoft’s ability to democratize R remains an open question, its commitment to R’s open source community is not. Not many years ago, Microsoft buying an open source company would have been impossible. The culture simply couldn’t support it.
But such has been the progress under CEO Satya Nadella that no one even smirks when David Smith, chief community officer at Revolution Analytics, declares:
For our users and customers, nothing much will change with the acquisition. We’ll continue to support and develop the Revolution R family of products — including non-Windows platforms like Mac and Linux. The free Revolution R Open project will continue to enhance open source R.
Of course they will. Not only will Microsoft tolerate it, Microsoft will actually encourage it. It’s a new Microsoft, helping to create a new R. The two need each other, making this an exceptionally interesting development in big data.

Read More News :- Techies | Update
1/30/2015 02:42:00 PM

Apple's liberation, Microsoft's haunting

Apple's amazing quarter has set the bar unrealistically high, while Wall Street unfairly discounts Microsoft's.

If I were Satya Nadella, I'd run to the nearest church and hire an exorcist to scare away the ghost of Steve Ballmer. Microsoft is getting killed on Wall Street as the market reacts to a disappointing quarter and disappointing guidance.

In the short run, the analysts are right: Microsoft will have another difficult year or two before Nadella can get beyond the horrible mess left by his predecessor, Steve Ballmer.

Over at Apple, meanwhile, Tim Cook has finally appeased the ghost of his predecessor, the iconic Steve Jobs, and today needs a train to haul his company's record profits to Fort Knox.

If anyone requires convincing that Cook has come into his own, a couple of statistics should clear that up: Apple sold 34,000 iPhone 6 and iPhone 6 Plus units every hour for the entirety of the December quarter and turned a profit of $18 billion, or about $8 million an hour — more quarterly profit than any company in history.
There's a reason that many tech executives dream of pulling a Michael Dell — taking their companies private. By its very nature, Wall Street has a short-term mentality — every three months it looks back a quarter and forward a quarter or two — which is why Apple is being rewarded on Wall Street and Microsoft is being punished.

With that in mind, you need to ask whether Apple is really as good as Wall Street says and whether Microsoft is really as bad. The answers are maybe and no.

Apple kills it in China, helping overcome U.S. saturation

One of the most striking numbers in Apple's earnings report was the $16.1 billion in sales revenue it reaped in China, which includes Taiwan, Hong Kong, and the mainland — an increase of 70 percent over last year.
Apple is not only doing well in China, it's turning the tables on the local competition. "The incredible popularity of the iPhone 6 and 6 Plus in China in Q4 2014 has led Apple to take first place in the Chinese smartphone market for the first time by units shipped. This is an amazing result, given that the average selling price of Apple's handsets is nearly double those of its nearest competitors," according to a report from Canalys, a market researcher.

That's particularly important because there are signs that the smartphone market in the United States is becoming saturated. It's becoming hard to find people who don't have a cellular plan. Before long, nearly 90 percent of all cellular customers will already own a smartphone, says AT&T CEO Randall Stephenson.

China, though, is on an opposite trajectory, with major segments of its population becoming more prosperous and hungrier for the trappings of the good life. A market that large will help insulate Apple from the inevitable slowing of domestic demand.

iPads lose their luster, and no clear iPhone-size hit is coming

However, there is the iPad issue. Sales of iPads, along with tablets by every other major manufacturer, are slowing. Apple sold 21.4 million iPads in the last quarter, down 18 percent from 26 million in the quarter a year ago.

Asked about this on the earnings call with analysts, Cook really had no answer and said he doesn't expect a reversal of that trend in the next few quarters. While maintaining that he's very bullish on the iPad's future, he admitted that the company's understanding of that relatively new market is still incomplete. It's possible, he said, that the upgrade cycle for iPads is relatively long, and the larger iPhone 6 is cannibalizing tablet sales.
Mac sales are strong; in fact, they set a record at a time when PC sales continue to shrink. But the iPhone accounts for nearly all of Apple's growth.

What's next? Cook mentioned that the Apple Watch will go on sale in April and Apple Pay is gaining strength. No doubt the Apple Watch will sell very well when it first hits the market, but I'm far from convinced there will be sustained demand for the product or for Apple Pay. I could certainly be wrong, but both products strike me as solutions in search of a problem.

I’m not at all being critical of Cook and Apple. But it’s hard to believe that the company can continue to grow so rapidly. Some acts are simply too difficult to follow — so you can bet that in a year or so comparisons to this record-breaking quarter will lead Apple's critics once again to moan, punish the stock, and wish that Jobs were back.

For Microsoft, it's Windows 10 or bust

Figuring out what happened to Microsoft in the last quarter doesn't require a lot of deep analysis. Windows sales got a pretty good bump last year when Microsoft ended support for XP. When that sales bubble inevitably popped, of course the most recent quarter looked bad by comparison.

Still, the rapidity of the Windows sales decline probably surprised Nadella and his team, and it certainly surprised investors who then pummeled the stock.

But that's on Ballmer, not Nadella. Windows 8, as we at InfoWorld have written many times, was a disaster. It's a millstone around the neck of a company that has yet to break with the PC-centric worldview of the past.

Nadella wants to make that break, and he arguably has the vision to do so, as my colleague Galen Gruman wrote earlier this week. What's more, Microsoft has a strong server product line and a solid cloud business with Azure, and they are unharmed by the repeated failures on the Windows and Windows Phone client sides.

There's a huge well of creativity locked inside Microsoft, and with Ballmer gone we'll see innovative new products like the HoloLens augmented-reality technology demonstrated at the Windows 10 event in Redmond earlier this month. True, that product was developed by the old guard, but the point remains: Microsoft is not filled with old fogies unable to think beyond Windows and Office.

It's worth remembering that Microsoft has a history of stumbling with early product launches, then recovering. The Surface tablet, for example, looked like a turkey when it debuted and cost the company nearly $1 billion in losses.

But Microsoft said Surface revenue hit $1.1 billion for the most recent quarter — up 24 percent year on year — driven by the Surface Pro 3 and accessories. Wisely, the company appears likely to kill off the nearly useless Windows RT version.

The real test will come in the back half of this year, when Windows 10 goes on sale. Many millions of consumer and business users are working on PCs that date back to 2009, the year Windows 7 debuted.
Even with the longer lifecycle of PCs these days, many of those people are more than ready to upgrade, a step they wouldn't take when it meant coping with the unfamiliar, frustrating interface of Windows 8. If Windows 10 is as good as the early releases lead many reviewers to believe, users will switch in a hurry.
According to a Computerworld analysis last year, Microsoft should be able to get about 18 percent of all Windows PCs onto Windows 10 within seven months of its release, which would be an uptake speed record for Microsoft.

Profits, however, won't take off right away. Microsoft will offer free upgrades for a year, a smart move, but one that will keep the bottom line from growing as fast as Wall Street demands.

Of course, Microsoft's future can't be only Windows 10. But ultimately revenue from the new operating system will stabilize the company and give Nadella time to implement his vision of a modern Microsoft. Wall Street may not be so patient, but customers should be.

Read More News :- Techies | Update
1/30/2015 02:40:00 PM

Amazon's next targets: Microsoft Exchange and Gmail

Amazon debuts WorkMail, cloud-hosted email for business, but its cloud still draws more attention.

Following a flurry of left-hook announcements aimed at its cloud competition over the past few months, Amazon is going for a body blow with a corporate cloud-hosted email service.

Word of Amazon's new WorkMail service began circulating on Wednesday, with a full launch due in Q2 of this year. Even with the scant details currently available, it's clear Amazon's ambition is to heavily bait the hook and lure business users away from Google's Gmail and Microsoft Exchange.

Most of WorkMail's offerings, as described in detail by Forbes's Ben Kepes, should sound familiar to any IT administrator. Aside from hosted email, it also provides calendaring and resource booking, contact lists and task management, and public folders. In addition, it's meant to integrate with either cloud-based or on-premises directory services. It should work transparently with existing email clients like Outlook, as well as any client that uses Exchange ActiveSync, and a Web client along the lines of Gmail or Office 365 is also available.

Amazon also leans hard on other advantages in security and ease of migration. With security, Amazon ties in another recent announcement: KMS (Key Management Service), its cloud-hosted encryption key vault. Keys from KMS, including those provided by the customer, can be used to encrypt email at rest. Users also have control over the geographic regions in which their WorkMail data is stored. Finally, Amazon is said to be providing ways to migrate from an existing Exchange store, although it isn't clear if other providers (such as Google) will get tooling by launch time. The service is set to cost $4 per user per month, with mailboxes capped at 50GB per user.

Amazon has established itself as a major -- if not the -- leader of cloud infrastructure. Apart from AWS, few offerings have set standards, even if only de facto, for how the cloud works. But it has made less of a splash when attempting to horn in on the enterprise desktop. WorkSpaces was meant to compete with VMware for providing virtual desktops at competitive rates, but left open questions of compatibility, offering what amounted to a reskinned version of Windows Server 2008, not Windows 7. The Zocalo document-management service was designed as a further complement to WorkSpaces, but held little incentive for anyone already using Box, Dropbox, or Google.

WorkMail, by contrast, seems more deliberately designed to sway new users, in large part by providing the kind of granular control over data -- encryption in particular -- that other mail providers presumably skimp on or dance around. The major competition, Microsoft, already has some of the same functionality, if not quite in the same form. Microsoft has added message encryption to Office 365, but mainly as a way to protect individual messages in transit. While Microsoft offers geolocation for customer data, it's in an automatically determined form. According to its documentation for Office 365 users, "the customer’s country or region, which the customer’s administrator inputs during the initial setup of the services, determines the primary storage location for that customer’s data."

Read More News :- Techies | Update

Wednesday, January 28, 2015

1/28/2015 08:20:00 PM

29 tips for succeeding as an independent developer

Find out how to cut corporate ties and transform your coding chops into a thriving business.

Software developers face a challenging employment landscape. On the one hand, the main advantages of corporate employment -- job security, career advancement opportunities, retirement benefits, health insurance -- are disappearing. At the same time, the demand for programming skills continues to grow and grow. No wonder, then, that many developers are considering the independent path.

But before you pack up your cubicle and hang up your shingle, read up on these 29 tips from a developer who has traveled the independent road, taken notes along the way, and reaped the rewards. Get the big picture and best practices from a developer who's taken his core knowledge of coding and turned it into a sustainable, successful business.

In this downloadable PDF, InfoWorld offers pointers on the challenge of going independent, particularly the transition from holding a job to starting (and being) your own business. InfoWorld contributor Steven A. Lowe covers contracts, marketing, skill sets, organization, clients, and more for developers who want to go it alone and succeed.

Yes, you can live the dream of being your own boss, setting your own hours, and working only on projects that interest you -- not to mention making a lot of money -- as an independent developer, but prepare yourself for the reality as well. Find out how in InfoWorld's special report.

Read More News :- Techies | Update
1/28/2015 08:16:00 PM

Win with APIs by keeping it simple

Whether you’re making consumer products or business software, complexity usually means failure.

Less is more. We all know examples where this philosophy has resulted in products that are incredibly easy to use and, as a result, extremely successful. Apple is renowned for its laser focus on design, stripping away any excess and refining what’s left again and again. Google Search is another famous example of simplicity leading to massive adoption.

Can the “winning through simplicity” approach apply to APIs too? Are there steps we can take when designing, building, and operating APIs that will keep APIs simple and increase their chance of success?
Indeed there are.

Make it simple to understand

You don’t need to read a manual to figure out how to use an iPhone, and you shouldn’t need to spend days figuring out how an API works before you get started.

Whether you consider Facebook life-changing or irritating, its API is mostly intuitive and simple to use. Once you get beyond authentication (the hardest part), the rest is trivial.

Typing https://graph.facebook.com/534822875 into the browser returns information about the user with the ID 534822875. This happens to be me. I get back a snippet of JSON text that looks like this:
{   "id": "534822875",   "location": {       "id": "114952118516947",       "name": "San Francisco, California"   }}
Facebook built a shortcut into its API: using /me to represent the user currently signed in. I could have typed in https://graph.facebook.com/me and received the same results as above.

The rest of the API can be navigated with a quick glance at Facebook’s API docs. In fact, I can almost guess the capabilities of the API and the names of the functions based on my knowledge of the Facebook application itself: friends, photos, likes.
For example, https://graph.facebook.com/me/likes returns a list of everything I ever liked on Facebook.
{
   "data": [
       {
           "category": "Media/news/publishing",
           "name": "The Guardian",
           "created_time": "2014-08-01T01:32:17+0000",
           "id": "10513336322"
       },
       {
           "category": "Movie",
           "name": "The Lives of Others Movie",
           "created_time": "2014-07-31T15:47:14+0000",
           "id": "586512998126448"
       }
   ],...
}
The API is intuitive and you can almost guess the names of the resources available within it.
Guess what I would type to get the list of my photos? Bingo: https://graph.facebook.com/me/photos
Here’s another example. Want to get those same photos from Flickr? Try:

https://api.flickr.com/services/rest/?method=flickr.people.getPhotos

People will give dozens of technical reasons why they don’t like the Flickr API: It’s not RESTful and so forth. But at the most basic level, the problem is it’s not simple. It’s not intuitive or memorable or easy to understand. As a developer, I’ll always have to go back to the documentation. That by itself will hinder adoption.

By contrast, the Facebook API is simple to understand at a glance. It’s easy to get started and get going. That’s why more than 200,000 developers have used it to create more than 1 million apps.

Make it simple to use

Following on from the “make it simple to understand” principle is taking a RESTful approach to modeling APIs. This means using HTTP resources and actions the same way a browser interacts with the Web. This will give you the benefit of modeling your API after real-life items and relationships in your business. How you talk about your API will map closely to how your employees, customers, and users talk about your business.

Take a look at the following RESTful API URLs, which allow you to retrieve accounts or get a specific bank account.
# list all accounts
GET /accounts
# show a specific account
GET /accounts/{account-number}
The following returns the balance for a specific account. It’s modeled on a real-life example.
GET /accounts/6242525/balance
You can imagine a customer on the phone saying, “My bank account number is 6242525. Could you tell me my balance?”

Taking this approach means you end up with a common and meaningful language throughout your API that maps to your business and makes it easy for developers to understand. Another benefit of following RESTful practices is that many API calls can be easily invoked via your browser address bar, so trying them out is relatively simple.
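
As an illustration of how those resource URLs might map onto handlers, here is a minimal sketch using Flask; the framework choice and the in-memory account data are assumptions for illustration, not a prescription.

# Minimal Flask sketch of the resource-oriented URLs above.
from flask import Flask, jsonify

app = Flask(__name__)
ACCOUNTS = {"6242525": {"account_number": "6242525", "balance": 1523.42}}

@app.route("/accounts")
def list_accounts():
    return jsonify(list(ACCOUNTS.values()))

@app.route("/accounts/<account_number>")
def show_account(account_number):
    return jsonify(ACCOUNTS.get(account_number, {}))

@app.route("/accounts/<account_number>/balance")
def show_balance(account_number):
    return jsonify({"balance": ACCOUNTS.get(account_number, {}).get("balance")})

if __name__ == "__main__":
    app.run()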

Simple data formats

JSON is becoming the de facto data exchange format for a very good reason. Developers are tired of working with verbose formats like XML. They no longer want to write custom parsers and prefer a lightweight option that's easy on the eye. JSON solves many of these problems. In addition, if you -- like many API users -- are working in JavaScript, it’s easy to bring JSON into your code.

JSON simply maps values to keys. Looking back at the example earlier, you can see that the name of the item I liked was The Guardian newspaper, and the point in time when I said I liked it was 2014-08-01.
{   "category": "Media/news/publishing",   "name": "The Guardian",   "created_time": "2014-08-01T01:32:17+0000",   "id": "10513336322"}

Navigating your API

By providing pagination links as URLs, it's really easy to navigate an API, which is especially important when dealing with large sets of data. Instead of requiring users to figure out where to go next by manually constructing URLs, your API can tell them. The following is a good example from Facebook’s Open Graph API, in which all responses return a “paging” attribute that tells users where to go to get the previous or next set of data. It’s simple, and it has the added value that a bot or crawler could navigate its way around your API.
{    "paging": {
       "previous": "https://graph.facebook.com/me/albums?limit=5&before=NITQw",
       "next": "https://graph.facebook.com/me/albums?limit=5&after=MAwDE"
   }
}
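
A client can then walk the data set without building URLs by hand, as in the following sketch using the requests library; the access token and starting URL are placeholders.

# Sketch of a client following the "next" URL the API hands back.
import requests

url = "https://graph.facebook.com/me/albums?limit=5&access_token=ACCESS_TOKEN"
while url:
    resp = requests.get(url)
    resp.raise_for_status()
    body = resp.json()
    for item in body.get("data", []):
        print(item.get("name"))
    # No URL construction needed: the API says where to go next, if anywhere.
    url = body.get("paging", {}).get("next")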

Flexible usage

Giving users control over how the API operates is a powerful move. There are many ways to do this, including the ability to sort using various rules, searching, and filtering. Field filtering is the ability to specify which fields should be returned in the response data. This can be especially useful in cases where there are bandwidth or performance issues or where a user cares about only a small subset of the API response data. Another benefit of filtering is that it limits the need for the API creator to have “different flavors” of the API because users can tell you what they care about.

https://graph.facebook.com/me/likes?fields=category,name
The API call above would tell the API to only return the fields “category” and “name” in the response.
{
   "category": "Media/news/publishing",
   "name": "The Guardian"
}
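
Server-side, honoring a fields parameter can be as simple as the following sketch, again using Flask purely for illustration; the sample record is made up.

# Sketch of honoring a "fields" query parameter server-side.
from flask import Flask, jsonify, request

app = Flask(__name__)
LIKE = {"category": "Media/news/publishing", "name": "The Guardian",
        "created_time": "2014-08-01T01:32:17+0000", "id": "10513336322"}

@app.route("/me/likes")
def likes():
    fields = request.args.get("fields")
    if not fields:
        return jsonify(LIKE)
    wanted = {f.strip() for f in fields.split(",")}
    return jsonify({k: v for k, v in LIKE.items() if k in wanted})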

Make it simple to maintain

Every API needs to support versioning for backward compatibility. It should also give users a way to bind themselves to a specific version if necessary. The easiest way to do this is to implement a version number into a base URI and to support the latest API version under a versionless base URI. Doing both gives API client developers the choice whether to lock themselves into a specific version of the API or always work with the latest one available.
# versioning
/api/v1.0/accounts/12345
/api/v1.1/accounts/12345
/api/v2.0/accounts/12345
# alias the versionless URI to the latest version
/api/accounts/12345
Companies need to be careful about changing the rules around the use of their APIs. Changes need to be vetted and carefully communicated to prevent developer revolts.

SSL

Security is critical. Provide access to your APIs over a secure HTTPS connection using SSL. This will encrypt and verify the integrity of the traffic between your user and the server, as well as prevent man-in-the-middle attacks. It’s easy to set up. Don’t even make your APIs available over an unsecured HTTP connection.

Make it simple to operate

When you get to an operational API with hundreds or thousands of users, you will need to consider a whole suite of advanced features.

Access control

You will need to provide developers with keys to access your API. Hackers may try to break in, and you will want to limit API traffic by user, to prevent developers from crashing your API -- intentionally or otherwise.
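
Off-the-shelf API management platforms handle this for you, but as a rough illustration of per-key throttling, here is a minimal sliding-window limiter sketch; the window length, request cap, and key name are arbitrary values chosen for the example.

# Minimal sliding-window rate limiter keyed by API key.
import time
from collections import defaultdict

WINDOW_SECONDS = 60
MAX_REQUESTS = 100
_request_log = defaultdict(list)   # api_key -> recent request timestamps

def allow_request(api_key):
    now = time.time()
    recent = [t for t in _request_log[api_key] if now - t < WINDOW_SECONDS]
    if len(recent) >= MAX_REQUESTS:
        _request_log[api_key] = recent
        return False               # over the cap: reject or throttle
    recent.append(now)
    _request_log[api_key] = recent
    return True

print(allow_request("developer-key-123"))   # True until the key hits the cap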

Monitoring

You will need to know when your API becomes unavailable and must be restarted. You will also need to know how your API is being used: by whom, where, and when.

Integrations

You may need to provide your users with analytics and reports on how they are using your API, as well as how often. You may need to integrate a billing system to charge for use of the API.

Some of these features are not easy to build. In fact, adding them is probably one of the more complicated aspects of the API development lifecycle. Once again, the key is to keep it simple. Don’t attempt to roll your own solution. Instead, use an off-the-shelf API solution to manage your APIs. Direct your energy into the business aspect of the APIs and what they can do to differentiate you from your competitors.

APIs that are simple and easy to use are much more likely to be adopted internally, making your architecture and business more flexible. They’re also more likely to be adopted by outside developers, potentially opening up new channels of revenue. Creating a successful API product suite can transform your business, drive revenues, and unleash innovation. But remember to keep it simple.

Read More News :- Techies | Update
1/28/2015 08:09:00 PM

Install once, fail twice: A tale of shoddy upgrade

Two is the unlucky number for one company where an Internet upgrade starts off wrong.

Make a list and check it thrice -- and remember to check it again when the unexpected occurs.
A couple years ago, we purchased a new office in a distant state. It already had phone and Internet access provided by one of the large national telecoms, but due to the sparseness of booming industries in the area, the bandwidth was abysmal. The 6Mbps down/512Kbps up connection wasn’t sufficient, but there was no other option. The telco assured us we'd hear back as soon as upgrades were available.

In the meantime, the remote site suffered through with weekly, hours-long outages, but no upgrades were scheduled. We had to endure.

Hooray for the upgrade!

Almost a year passed, and I was on the phone with the telco once again during an outage. While the tech was recording my information, he said, “Has anyone told you our superduper Internet speed is now available at that location?” No, I told him, and asked for a quote for the upgrade.

Amazingly, the quoted price represented a 40 percent decrease in the monthly bill and a nearly eight-fold increase in Internet speed -- a deal I couldn’t pass up. I wanted it right then and there, but since I already had a trouble ticket in the hopper I wanted the location functioning before we rolled over to another service. I told him I would get back to them on Friday.

The following day, everything was back up and I called to upgrade to the higher speed. I reiterated that our phones and fax would remain the same and we wished to keep our static IPs. I was told that would not be a problem, and they could be there Monday morning at 8:30 a.m. This move looked even better to me in light of the prompt service.

Monday morning came, and I checked my monitoring system. I could see the remote site was up, but since it was in a later time zone that was fine. At 9:45 my time, it went dark. I assumed the install was in progress, and an employee at the site assured me a telco truck was out on the main road. I figured an hour, maybe two, and we would be back in business at warp speed. When will I ever learn?

Trouble in paradise

Lunchtime came, and the location was still down. I called the site manager’s cell: No trucks anywhere to be seen. I spent lunchtime playing phone tag with the telco, only to discover that its procedure was to drop the service one day, then a new team would hook up the service the next day. Yes, the company was aware it was a business site but said it would be impossible to bring the teams online today, even if we were rolled back to the old service -- no techs available. I should have known.

The following morning, the remote site was back up with the new speed, so we were able to regroup and complete the prior day’s work. All seemed to be well.

Five weeks later, I got a phone call from the remote site: The fax machine wouldn’t send or receive. Puzzled as to what could be wrong, I began my diagnostic routine and ended up back at the install.

I quizzed the employees as to when they noticed that the fax wasn't working. Of course, no one could remember. Being 500 miles away, I decided to contact the telco that did the upgrade and report my suspicions. The telco agreed to send a tech the next morning, stating that if it was at fault, there would be no charge; if not, we'd pay (dearly). Still, it would be cheaper than airfare.

As it turned out, it was the tech’s fault, so we were not charged, but it added another item to my checklist during upgrades: Even if the telco doesn't need to touch a line and isn't supposed to, check it anyway right after the work is completed. You never know.

Read More News :- Techies | Update
1/28/2015 08:09:00 PM

Hadoop gets serious about data governance

Hortonworks launches a comprehensive three-point plan to shore up data governance in Hadoop.

For all of Hadoop's growth and acceptance as an enterprise technology, it still lacks key features. Some of its shortcomings, such as consistent data governance methodologies, are too important to ignore any longer.

Hortonworks, a major Hadoop vendor, is attempting to address the problem by creating the Data Governance Initiative for Hadoop. The goal is to incorporate governance for data in Hadoop's design, rather than as a good idea in the abstract.

Hortonworks, creator of the HDP distribution (now supported on Google Cloud Platform), has approached the problem from several fronts. On one front, there's Hadoop itself, where Hortonworks has been attempting to address the need for better data governance and security technology by contributing to related Apache Foundation projects. One such project, Apache Ranger, was derived from a closed-source product that Hortonworks bought and transformed for Apache; another, Apache Falcon, aids with the mechanics of managing data lifecycles (intake, purging, and so on). With these moves, Hadoop and its associated projects will have a common set of mechanisms for dealing with data security, both inside and outside of Hadoop.

On another front, Hortonworks has collaborated with companies using Hadoop in the field and at scale -- Target, Merck, Aetna, and SAS, specifically -- to implement the proposed measures. By doing much of this work out in the open, Hortonworks hopes others will come on board and not simply because the pieces are being rolled into the projects underlying Hadoop.

A third approach, auxiliary to the other two, works with existing enterprise data-governance technology, but not to replace it. Tim Hall, vice president of product management at Hortonworks, explained in a phone call that the point of this initiative is not to give a company that uses -- or is considering -- Hadoop a substitute for its existing tool set, but a complement to it.

"Hadoop augments your existing data architecture," Hall explained. "It allows you to modernize it and cost-effectively land massive volumes of data into it; this isn't a rip-and-replace strategy. We want to build this [initiative] with the intent of ensuring that we can provide the metadata, the policy access, et cetera, to third-party data governance tools, so that regardless of where you're trying to look at information, you get a consistent view." In this case, "consistent" refers to enforcing the policies set on the data and the access controls meant to protect it.

According to Hall, the main problem with third-party governance has to do with the tools seeing the edges of Hadoop, but being unaware of what goes on inside it. "If you have to stitch a compliance report together," Hall said, "you're like, 'Well, I sent it to my Hadoop infrastructure, but I don't know if Johnny or Suzy hacked up the data a hundred ways to Sunday.' I know it came out the other end looking like this, but what happened in the middle?"

Other vendors' approaches to solve this problem, Hall noted, work only as long as every job created is written using the vendor's tooling. As he put it, the idea is to have "comprehensive visibility [from the core of Hadoop] regardless of whether their tool was used or not."

Hortonworks' short-term plan, currently under development, is to create a working prototype that implements the most core functionality: a REST API, a centralized taxonomy, import/export metadata, and so on. Following that, the next push will be formally announced at the February Strata conference, with the features rolled into future releases of HDP "as they land throughout the year."

It also remains to be seen how other major Hadoop vendors pick up and extend this work. MapR, Pivotal, and Cloudera all have a stake in open source Hadoop, so any security and governance developments affecting Hadoop's core will likely become key parts of their own distributions. In turn, each would have new ways to contribute back -- and new hooks for proprietary value-adds to distinguish themselves from each other.

Read More News :- Techies | Update
1/28/2015 08:07:00 PM

New Aurelia framework wants in on JavaScript riches

Aurelia, a modular framework from a former AngularJS team member, embraces ES6 and aims for its own share of the JavaScript ecosystem.

A wide variety of technologies has piggybacked on JavaScript, including Famo.us, AngularJS, Meteor, and Node.js. A new entrant, Aurelia, offers another angle as a modular framework enabling customization and accommodating the latest ECMAScript technologies.

Built by Rob Eisenberg, formerly of the AngularJS team, Aurelia was introduced this week. It is currently in an early preview stage and not feature-complete. Aurelia is being developed by Durandal Inc., with a focus on Web programming.
Eisenberg lauds the framework for its standards compliance and conventions. “Aurelia fully embraces the future and is the first framework designed from the ground up for ES6 and other ES7 technologies like Object.observe,” he says in an email. “We also favor simple conventions, which allow you to write code that is very clean. Most code written with Aurelia is plain JavaScript with no dependencies on custom API calls or special base classes.”

The open source framework “is highly modular in ways that let developers customize it or even use its component parts to build their own framework,” Eisenberg continues. “For example, our adaptive binding engine -- new in its own right -- is completely decoupled from binding syntax or templating. This means that developers can use it on its own to build new things we haven't even thought of yet.”

Because Aurelia is modularized and composed of small libraries, developers can use the entire framework to build apps, use traditional libraries to build websites, use libraries on the server with Node.js, or even build custom frameworks. It employs simple conventions to eliminate boilerplate code, the Aurelia website says.

“With many frameworks or libraries, you need to register classes with the framework, inherit from special base classes or provide custom metadata before the framework knows what to do with your code,” Eisenberg says. “With Aurelia, you don't need to do that most of the time. Instead you write plain classes and follow various naming and project structural patterns, much like in something like Rails.” The framework can infer many items and do a lot of work on the developer’s behalf, he says.

The platform leverages an extensible HTML compiler for customizing markup and a data-binding engine for binding between vanilla JavaScript and the DOM. It also supports development via JavaScript variants such as CoffeeScript.

Up next for Aurelia will be a beta release to add missing features and performance optimizations. At some point, Aurelia will be solid enough for production use, says Eisenberg. While he is the original developer, a core team of 12 people is actively contributing to it.

Read More News :- Techies | Update
1/28/2015 08:05:00 PM

Appery.io pairs mobile app builder with back-end services

The Appery.io online mobile development platform crosses categories.

Appery.io is a rather capable cloud-based mobile Web and hybrid mobile development platform with online visual design and programming tools, as well as integrated back-end services. You can think of it as a cross between an app builder and an MBaaS (mobile back end as a service).

The Appery.io app builder generates HTML5, jQuery Mobile, and Apache Cordova code, and the Appery.io build server generates iOS, Android, Windows Phone, and HTML5 apps. The Appery.io MBaaS provides hosting, a MongoDB NoSQL database, push notifications, JavaScript server code, and a secure proxy.

Tuesday, January 27, 2015

1/27/2015 02:26:00 PM

6 big changes coming to Fedora 22

Systemd, Python 3, and Docker deployments are a few changes under consideration for Fedora 22.

Hold on to your (red) hats. Fedora 22, the next iteration of the "move fast and break things" version of Linux sponsored by Red Hat, is set to arrive on May 19. After the multiple editions introduced in the previous Fedora, what's in store this time?

The answer lies with the proposals received by the Fedora Engineering and Steering Committee (FESCo), whose deadline for proposed changes passed last week. Here are some of the more notable and head-turning proposals for Fedora 22 that seem most likely to make it to the final product.

Splitting systemd

Systemd, the proposed replacement for Linux's sysvinit startup module, has sparked controversy left and right due to its monolithic design. Schisms in the Linux community have erupted over its use, but Red Hat appears to be firmly in the "yes" camp, having added systemd to RHEL 7.

But even Red Hat seems to recognize there are limits to the functionality that can be stuffed into systemd. Among the pending proposals, one suggests splitting the systemd-units subpackages into their own item, reducing the size of the installation footprint for minimal builds. Keeping Fedora -- and by extension Red Hat -- on the leaner side is part of the mission for the Cloud edition of Fedora and might be a good objective for any version of Fedora.

Another recently added systemd feature, the ability to pull down and locally install container images, might spark introspection along the same lines. On the one hand, the feature could be useful in the ongoing push to make Fedora and RHEL more Docker-centric; on the other hand, it might be further evidence of systemd's excess.

Python 3 by default

In theory, Python 3 is the better of the two Pythons -- if only because it has a guaranteed future. In practice, the sheer size of the Python 2 user base and application-dependency base has made it tough for Python 3 to gain much ground.

Here, the idea isn't only to provide Python 3 as the default Python interpreter for Fedora 22; software packages within Fedora that have Python dependencies (such as the Anaconda installer) should also use Python 3. From this point of view, little within Fedora itself will block the move, since most Python-dependent applications already use Python 3 or can be converted to it with minimal effort. Any future Python-dependent projects for Fedora will also be built for Python 3 by default. For applications that need it, Python 2 will remain available through Fedora's repositories.

DNF, the new installer

Yum has long been Fedora's package manager and installer, but a new tool, Dandified Yum (DNF), is set to replace it in Fedora 22. DNF's advantages include being "built on modern [dependency-solving] technology allowing for faster and less memory-intensive operation," along with a full Python API that works with both Python 2 and Python 3.

For those hung up on Yum, it can run side by side with DNF. DNF has been in development as a Yum replacement since 2012, with an experimental version available since Fedora 18. By now, Fedora is confident that DNF can replace Yum completely.

A new database server role

The most recent version of Fedora provided three editions of the project for different workloads: workstations, servers, and cloud deployment. In addition, Fedora introduced the notion of roles, one-click installations for all the components needed in a given server, along with tools for building the roles. One previously proposed role was a domain controller that uses FreeIPA to interoperate with Microsoft Active Directory.

Another proposed role applies to databases -- one to deploy a PostgreSQL-powered database server that can function as a primary or replica server. Also under the scope of the proposal is the creation of a plug-in for D-Bus as part of the role's deployment and monitoring functionality.

Gradle

Gradle, the Java build system favored by Google for Android, is under consideration for Fedora 22. But the Fedora team's ambitions for Gradle extend beyond providing it for developers; Fedora also wants to use Gradle to build Fedora packages. The benefit lies in the automation; developers generating RPM packages for Fedora can leverage Gradle for the work.

Fedora Atomic Host

With Project Atomic, Red Hat devised a standard pattern for using a lean-and-mean version of its OS as a deployment system for Docker containers. Previously, Fedora implemented the Project Atomic pattern as a Docker host for the cloud.

The next version will allow Fedora Atomic Host to be deployed to a slew of other targets: specific cloud providers (EC2, Google Compute Engine, and generic OpenStack deployments); LiveOS images that can be booted via PXE on diskless systems; and any bare metal, by way of Fedora's Anaconda installer.

1/27/2015 02:24:00 PM

Google Web Toolkit dumps compatibility for the sake of upgrades

Two new versions of Google Web Toolkit are due, featuring support for Java 8 but sacrificing backward compatibility.

Google Web Toolkit (GWT), which lets developers build browser-based applications in Java and deploy them in JavaScript, is on track for major enhancements this year. The GWT road map calls for two upcoming upgrades, but the latter will break compatibility.

The technology was the subject of the GWT.create conference in Silicon Valley late last week, where Google senior engineer Ray Cromwell talked about its direction. With GWT 3.0 due around the fourth quarter of this year, plans call for breaking compatibility with previous releases so that developers can deprecate older technologies. Previously, compatibility was rigorously maintained.

“Now, because IE6, IE7, and IE8 are dead and there’s certain legacy things that we don’t want to support anymore because we need to target newer browsers and this new world of mobile, we want to deprecate these things,” Cromwell said. Developers who recompile apps to GWT 3.0 might find them failing and will need to edit code to get them to work. But GWT builders will continue developing the 2.x line. “We’re not going to leave those people out to rot,” said Cromwell.

GWT 3.0 will support the idiomatic module and class generation featured in the ECMAScript 6 specification underlying JavaScript, with ECMAScript 6-compliant code generated by the compiler. In addition, GWT 3.0 includes more Java-friendly calling into JavaScript, via JS Interop Phase 2.

GWT 3.0 also covers Elemental 2.0 for browser API bindings. “Elemental 2.0 is basically a library that you can run yourself at any time, and it will actually go out and fetch the browser specifications from the W3C consortium, download the latest copies, and it will generate all of the Java interfaces for the APIs,” Cromwell said.

Version 3.0 will also feature faster Java collections for the likes of array lists and maps, as well as DeltaJS support for updating JavaScript code. “DeltaJS basically allows your app to ship down only the JavaScript code that’s changed” when rolling out a new version of an application, Cromwell said. “It allows you to efficiently update like native apps do.”

The ServiceWorker API, which handles network requests in the browser, will be supported, allowing applications to work offline. “If you’re on an airplane, you can have an application that launches while you don’t have Wi-Fi,” said Cromwell, noting that Firefox and Chrome already support ServiceWorker.
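
Independent of GWT, the registration side of the ServiceWorker API is already usable from plain page script. Here is a minimal TypeScript sketch; the /sw.js path is a placeholder for whatever script implements the caching logic.

```typescript
// Register a service worker so the browser can intercept network requests
// and serve cached responses when the device is offline.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker
    .register('/sw.js') // placeholder path to the service worker script
    .then(registration => {
      console.log('Service worker registered with scope:', registration.scope);
    })
    .catch(error => {
      console.error('Service worker registration failed:', error);
    });
}
```
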
In the meantime, GWT 2.8 is due out within three months. Its main feature is support for Java 8, including lambdas. GWT 2.8 supports CSS 3.0 via Google Style Sheets (GSS), along with better debugging. JS Interop is featured for interoperability between Java and JavaScript. “It allows you to do cross-language calls between the languages very easily.” Also, users can look forward to faster production compiles in version 2.8.

Overall, Cromwell said, GWT produces code that is small and fast compared to handwritten JavaScript, and it leverages Java development skill sets, delivers better performance, and enables cross-platform code sharing with Android, as well as with iOS via technologies such as RoboVM. “There’s value in being able to share [about] 70 percent of your code base between your platforms.”

A developer at the conference, however, wanted Java 8 lambda capabilities sooner. “My only disappointment so far is I was really looking forward to Java 8 support,” said Arien Talabac, a developer at security trainer SANS Institute. “I was hoping today I could use it. It sounds like it’s months away.”

Lambdas reduce the need to write boilerplate code, he said. Still, Talabac remains a GWT advocate. “I like the concept of it in that JavaScript is a fundamentally difficult language to write large applications in,” he said. “Java is a better choice for it. You’re more able to write an enterprise-scale application.”

1/27/2015 02:22:00 PM

CoreOS Rocket hits second stage

With the release of Rocket 0.2.0, CoreOS hints that its developers are trying to learn from Docker's legacy.

CoreOS, maker of the Docker-powered, superminimal Linux distribution of the same name, turned heads and rattled eyeteeth when it announced an alternative container technology.

Amid arguments both for and against CoreOS' App Container and Rocket, a new revision -- Rocket 0.2.0 -- surfaced late last week and brought with it a clearer idea of how CoreOS wants the new App Container format to embody simplicity and security by default.

In a blog post last Friday, Jonathan Boulle of CoreOS announced new features for Rocket 0.2.0 and new details for the still-evolving container spec used by Rocket. The client now has a number of lifecycle commands for checking and disposing of containers, as well as a signature validation mechanism, "a small but important step towards our goal of Rocket being as secure as possible by default," Boulle wrote.

That last feature -- and its implementation this early in Rocket's lifecycle -- is meant to contrast with Docker's approach to container signatures and validation. Docker originally offered little in the way of verifying container integrity and didn't add more rigorous signature verification until version 1.3, released last October. CoreOS, however, perhaps learning from Docker's history, is building integrity-guaranteeing functionality into Rocket as early as possible.

CoreOS is also trying to ensure that at least as much work goes into the App Container (appc) spec as into the client itself (an implementation of the spec). The spec "continues to evolve but is now stabilizing," and two new implementations have also surfaced: jetpack (for executing appc containers on FreeBSD) and libappc (a C++ library for working with appc).

The single biggest challenge that CoreOS faces with App Container and Rocket isn't to make it a point-for-point technological match for Docker or even a superior project from a technical standpoint. Both Docker and App Container/Rocket are open source projects, so rapid evolution and keeping abreast of other developments are par for their respective courses.

Rather, the challenge is in how Docker has managed -- in a startlingly short time -- to generate a large culture of related third-party software, tools, and platforms. Making current Docker-powered services and Docker-centric tools support Rocket/App Container is more than a technical decision; the demand and tangible benefits have to be there. CoreOS may be able to accomplish some of that through its existing following, but little question remains that Docker is still the default for most people considering containers.

1/27/2015 02:15:00 PM

Downloading Windows 10 build 9926?

Here’s what to do immediately after you install the Windows 10 January Technical Preview.

Few people expected it, but Microsoft released the Windows 10 January Technical Preview bits at 9 a.m. PT on Friday. If you’ve been running the earlier tech previews, it’s one small step back and several giant steps forward. Stability reports are excellent -- I haven’t had any problems on a dozen machines.

If you don’t yet have your update and are willing to ride the beta roller coaster, sign up for the Insider Program, choose a suitable PC (or VM), and give it a whirl. Now that Microsoft’s servers have emerged from terminal meltdown, it should be relatively easy.

If you're trying to install the latest Windows 10 build on a Windows 8.1 PC, you may be greeted by the notice "Windows can't be installed because this PC uses a compressed operating system." I got it while trying to put build 9926 on a brand-new Asus Transformer.

The culprit? WIMboot. It's a Microsoft technology that squeezes Windows into a smaller footprint, so Windows can essentially boot from a compressed image. That's great for cheap PCs, but it doesn't play well with the Windows 10 installer. I haven't found any workaround.

As soon as you install build 9926, hurry over to the Update Center and get KB 3034229 installed. Click Start > Settings > Update & Recovery, then Check for updates (if you don’t see KB 3034229 already waiting for you); select the patch and choose Restart Now.
Microsoft seems to be sending out its technical directives via Twitter, including a tweet by Gabe Aul suggesting you get KB 3034229 installed.
If you aren’t yet using Twitter, sign up and follow @GabeAul.
Before you start spelunking, be sure you check the Insider Hub. In the new (argh), fixed-size Start menu, it’s most likely below the taskbar, so scroll down the tiled part of the Start menu on the right, and click Insider Hub.

Click the link on the right for Known issues in Build 9926. That’s a list of known bugs, generally mirroring the list posted by MVP Andre Da Costa on the Answers Forum.

1/27/2015 02:11:00 PM

Celebrating the top software and hardware of the year

The 2015 Technology of the Year Awards are an embarrassment of riches, packed with inventive new products.

There's really only one way to evaluate enterprise products: You need reviewers with hands-on expertise who understand the real challenges customers face. At InfoWorld, over the course of many years, we've worked hard to form a network of expert contributors who fit that profile and write reviews you can count on.

Every year, InfoWorld Executive Editor Doug Dineley pulls together our extended family of reviewers to debate the relative merits of contenders and produce the Technology of the Year Awards. For me, the results never fail to put the previous 12 months in perspective. After witnessing the unprecedented explosion of enterprise technology development last year, I've been particularly excited to see how the 2015 Technology of the Year Awards would play out.

As it turned out, some pretty bold themes emerged from this year's selection of 32 winners. First and foremost is the prevalence of open source, which has become the engine of enterprise technology development. No fewer than five Apache big data and NoSQL projects made the roster this year: Cassandra, Hive, Hadoop, Spark, and Storm. In addition, you'll find that nearly all of this year's winning programming languages, tools, and frameworks are open source. The plain fact is that open source licensing fosters experimentation and is accelerating both technology development and adoption cycles.

The other thing that jumps out of this year's batch is the quantity of truly new stuff. It's hard to believe that Docker saw its 1.0 version just last June, given the huge amount of activity swirling around its disruptive Linux container model. Another stunning example is Apple's Handoff technology, which debuted with Apple's iOS 8 and OS X 10.10 Yosemite last fall -- and which Executive Editor Galen Gruman describes as the first example of "liquid computing," where personal tasks follow you from device to device.

Two other pieces of extreme innovation came rocketing out of nowhere: Famo.us delivered an ingenious JavaScript framework that includes 3D layout and physics engines, erasing the line between native and JavaScript app performance. Plus, Splice Machine performed the singular trick of offering ACID transactions on top of Hadoop, combining the scale-out advantages of NoSQL with the traditional benefits of a tried-and-true RDBMS.

Last but not least, one winner will remind you that 2014 was the year Microsoft changed forever: Office for iOS. Its March debut may have represented just one of many changes in Redmond, including Microsoft's newfound affection for open source, but seeing Satya Nadella demoing Office on an iPad was a seminal moment for just about everyone.

I have a feeling this year is going to be just as interesting. In the coming months, you can rely on Doug and his stable of insightful reviewers to put it all in context.

1/27/2015 02:09:00 PM

From bad to worse: Down the rabbit hole

A series of cascading design woes threatens to overwhelm what should be a simple setup.

Sometimes, you run into a series of uncoordinated, inexplicable design decisions that defy logic and reason, create a larger problem — and can only be dealt with via another questionable design decision.

Case in point: I was recently asked to help figure out an odd problem with a small-business network and a networked automation and security system. The automation and security system controlled lights, provided intercoms, ran security cameras, the works. It did not, however, allow for static IP information to be configured into the controller. The only way the system could function was via DHCP, full stop.

This by itself was thoroughly bizarre. From an architectural point of view, this was a fundamentally flawed design because it made the entire system dependent on other systems for normal function.

You may assume that a network of any size will have DHCP services present, and providing the ability to use those services is fine -- but completely omitting the capability to set specific IP information on such a critical component is unconscionable. Further, control of the system via a mobile or desktop application might work via broadcast when on the local network, but if you want to control or use the system remotely, you must connect by IP address -- which you cannot statically set.

Typically, systems like this (and most network devices of any type) use IP stacks developed by others. Whether the foundation is Linux, VxWorks, BSD, or what have you, IP stacks are among the most baked-in code available. There's no excuse not to support a basic, fundamental feature like a static IP address. I won’t even go into the fact that the Web UI does not offer authentication, or that the suggested remote access method is to open a hole in the firewall directly to the controller. The mind boggles.

Enough about that — there are ways to mitigate such problems. For instance, every major DHCP server has the capability to set address reservations, which ensure that a specific host will always receive the same IP. At least then we can allow remote access via VPN from a mobile device and connect to the UI at a known IP address that is reliably delivered by the DHCP server. However, that wasn’t an option either.

The Cisco ASA5505 firewall at this installation, which functions as the DHCP server, handles many tasks well. It terminates IPSec VPNs, offers SSL VPN access, performs content inspection, and generally upholds Cisco’s solid reputation as a leader in the networking world. What it does not do, however, is provide any form of DHCP reservation capability — so simple, yet unachievable, in a $600 firewall with a Cisco name on the front.

It might have been as simple as using existing static ARP declarations within the ASA code to cause the DHCP service to assign the same IP address to a requesting MAC address, but no, that’s a nonstarter. You simply cannot do DHCP reservations with an ASA5505. Back to the drawing board.

Next on the list of possible solutions: Use one of the Apple AirPort Extreme wireless access points as a DHCP server, as they support DHCP address reservations. In this network, however, the AirPorts were used as access points only, not routers. They were simply bridging the wireless and wired network, not providing NAT or routing services (a fully supported configuration in the AirPort).

Unfortunately, that meant DHCP services were unavailable. There was no way to configure one of them to provide DHCP while acting as a bridge. Further, even as a stand-alone DHCP server, the AirPort would always want to deliver its own LAN IP address as the default route, rendering that method unusable. Back to the drawing board (again).

To recap, we’re simply trying to cause a networked device to be consistently available at the same IP address. We cannot assign it statically on the device. Several possible DHCP servers on the network are unsuitable for the task for one reason or another.

What should have been the work of a few minutes with existing hardware turned out to be impossible. The network at this particular site does not have any local server-class systems — they're either in the cloud or at other locations. We can’t simply provide DHCP services from a file server or a similar item. Back to the drawing board (once more).

Ultimately, I picked a Buffalo router running DD-WRT for $60. I disabled the wireless and WAN networking and set it up as only a DHCP server. It provides address reservations like a champ and sits alongside the Cisco ASA5505 with a single cable run into the LAN.

I am certain that at some point down the road, someone will look at that setup without any prior knowledge and ask why on earth this router/firewall is plugged into the network like that and what it could possibly be doing. When they’re told the Buffalo only does DHCP, the questions will intensify.

The lesson here is that bad design decisions force further bad designs. Also, if you’re adding networking to anything, make sure you support the full stack. Not doing so should be a felony. 
