Breaking

Monday, September 29, 2014

9/29/2014 05:26:00 PM

Surface Mini saga portends end of Windows RT

Recently revealed details about Microsoft's aborted Surface Mini are a likely indicator that Windows RT is headed the way of the dodo.


If recently revealed details about Microsoft's aborted Surface Mini are any indicator, Windows RT is likely headed the way of the dodo. Yesterday Brad Sams at Neowin wrote about his unfettered encounter with the Surface Mini. It's not clear how he came to get hold of the machine, which Microsoft apparently killed in May, immediately before its scheduled release. His source's only restriction: "I was told that I could not post any photos of the device, for now."

Sams' glowing review basically confirms everything we expected back in May -- a sub-8-inch screen running Windows RT on a pokey Qualcomm processor -- except Sams also mentions OneNote integration and a "fantastic case" with a clip for the included Surface Pro 3-style stylus.
Why did Microsoft kill the Surface Mini? It was lined up and ready to go, with stock on hand and a spot on the Surface Pro 3 roadshow. Nobody knows what it would have cost, but Microsoft likely would have priced it slightly (or greatly) above the competition, as it has with all Surface machines. If Sams' glowing review is any indication, it would have been met with adoration by Microsoft loyalists, and at least toleration in the market at large.
Hard to say if Microsoft could've sold any of them, but that has not always been a prime concern when it comes to the "devices" part of the old "devices and services" strategy.
As best I can tell, Microsoft killed the Surface Mini because it didn't want to put (and support) another Windows RT machine out in the ecosystem.
That looks like a good call. Even though Microsoft dropped the price of its Surface 2 back in August, the machines are still readily available in the Microsoft Store and marked down from $449 to $349 -- for a limited time only, of course.
Of the many dozens of machines listed in the Microsoft Store, only two run Windows RT: the stalwart Surface 2 and the heavily discounted Nokia 2520, both of which could be considered long in the tooth.
While Microsoft and the rest of the world have moved on to vastly improved architectures, the only surviving Windows RT tablets seem stuck in the last century.
It looks to me like Windows RT is headed the way of the dodo -- and not a minute too soon. Time to ring out the old Windows RT and the old Windows Phone, and ring in the new ... just plain Windows?

 

9/29/2014 05:24:00 PM

A closer look at CIO salaries, bonuses, and perks

Dave Barnes began his tenure at UPS as a part-time package handler in 1977. Today he’s CIO and global business service officer at the $55 billion shipping and logistics company.

He’s not the only CIO with impressive corporate longevity.

Deb Butler started at Norfolk Southern in 1978 in an accounting role, and today she leads the railway’s strategic planning and IT initiatives. Filippo Passerini joined Procter & Gamble in 1981 as a systems analyst in Italy and rose through the ranks to his current position: CIO and president of the company’s global business services organization.

Each of these veteran IT leaders is included in our analysis of CIO compensation. There are also several recently hired tech chiefs, including Sears CIO Jeff Balagna, who joined the retailer in 2013; JCPenney CIO Scott Laverty, who was hired in 2012; and AutoZone CIO Ron Griffin, another 2012 hire.

For long-timers and newcomers alike, finding out how much CIOs really earn is not easy to do.
9/29/2014 05:23:00 PM

How to set the Internet back 30 years in a few easy steps

The perfect storm of the Comcast/Time Warner Cable merger and the Net neutrality fight will decide our future for decades to come.


The United States is at a point now where we’ve somehow managed to survive the near-total destruction of our economy by a few obscenely wealthy bankers, yet made sure that they and their banks did not receive any significant punishment or new regulations. It was a close one, but by gum, they managed to make a ridiculous amount of money in the process.

Now it’s on to the next cliffhanger: whether or not we will destroy the nation's innovation and communication by handing the big ISPs either of their two huge wishes. The first wish is to allow Time Warner and Comcast to merge into one massive company that would control everything from last-mile access to content providers like NBC, Universal Studios, and a massive list of other media; the second is to essentially eliminate regulation on all data services they provide.
I doubt the U.S. economy can survive if both wishes are granted. We’ll be hard up for decades if either comes to pass, but both? That’s probably the end of the line.
It was a scant 30 years ago that we sliced AT&T into pieces and removed a telecommunications monopoly from power. Prior to that, there were no other options for phone service, rates were whatever they were, equipment had to be rented from the provider, and customer service was pedestrian. Telecom innovation from outside AT&T was next to nonexistent. Bell Labs was a shining diamond in the midst of all this, but it was the R&D arm -- the business side was a black hole. The tale of the answering machine’s birth in the 1930s and subsequent burial for several decades provides some insight there.
This is the kind of monster we've already allowed to rebuild in the United States in the form of Comcast and Time Warner separately -- yet without the regulation that AT&T functioned under. They openly collude to remain uncompetitive with each other, marking out territories for service that the other will not enter, and even trading territories when convenient to both parties. Then they have the unmitigated gall to claim they are indeed competing. Furthermore, they've resurrected Nixon’s Silent Majority to back up this claim. It's a theater of the absurd.
Not to put too fine a point on it: we already have big problems with collusion and monopolistic practices in Internet service in this country without allowing the two major nationwide ISPs to merge into a massive, unholy, monolithic beast. As anyone with any sense of what dealing with these companies is actually like can attest, we should be actively fostering more competition by opening up a free market in Internet services under Title II common carrier status. This approach has already succeeded in many countries, but we are somehow heading in the opposite direction.
Meanwhile, in major markets where there is competition, we’re seeing massive innovations. In mobile, we now have data service speeds that put Comcast and Time Warner Cable’s wired data services to shame, but are hamstrung by data caps. However, in the past seven years we've seen huge leaps in mobile technology and in smartphones in general. We've gone from flip phones to the iPhone 6 in that time. Meanwhile, Time Warner is handing out the same horrible DVR set-top box it introduced in 2004 and charging more rent for it 10 years later.
We are only 30 years removed from the breakup of AT&T. No reasonable person could claim that we’ve forgotten the reasons why it was necessary -- they can only claim that the lobbying and vast sums of money being spent to buy legislation are paving the way for this debacle to succeed. As Upton Sinclair said, “It is difficult to get a man to understand something, when his salary depends upon his not understanding it.”

At precisely the same time, we’re faced with the real specter of a tiered Internet, which would be a huge win for the ISPs and the ISPs alone. There is no upside to a gated or tiered Internet for the consumer -- there is only more money to be made by the ISPs, which are already enjoying substantial margins on publicly funded investments in their infrastructure. AT&T snapped up $100 million from the FCC recently and wants to deploy substandard service for that money in order to maximize the return.
The truth of the matter is that through the Universal Service Fund and other avenues, taxpayers have built substantial parts of the Internet's infrastructure in the United States. That infrastructure is then sold back to us at exceptionally high margins and ever-increasing rates by companies with no fear of competition and no real regulation.
If that weren’t enough, those same companies want to merge, then be allowed to control who does or does not reach their customers via the Internet. That would make the combined company arguably the most powerful corporation in the United States: it could quite literally decide which companies succeed or fail simply by (legally) delaying or dropping packets, or by demanding a massive extortion payment for access.
That is far worse than anything AT&T was capable of doing, and you can be certain there’s nothing like Bell Labs within Comcast or Time Warner Cable. No good can come from either situation if the companies get their way. At this point, if Comcast and Time Warner support an idea, that idea is bad for U.S. citizens, bar none.

 

Thursday, September 25, 2014

9/25/2014 05:31:00 PM

The care and feeding of a rockstar developer

Whether you're a hiring company, an educator, or a developer, find out what you're up against in the war for programming talent.

Developers rule in the current tech landscape and have their choice of prime positions at any number of major companies. They appear to be on a promised path, with companies fighting for their services. The world is their oyster, so it seems.
Not so fast -- coders may be in high demand, but hiring companies and educators hold some of the keys as well. For example, before they can land an interview, aspiring programmers must get the proper training first, with code academies going up against prestigious universities. Once they land a job interview and perhaps even an offer, a programmer has to consider the trade-offs of working for a sprawling tech giant or a smaller, more intimate firm. That easy path has suddenly sprouted detours.
In this special report from Silicon Valley, InfoWorld traces the road and its various offshoots for developers in today's job market:
  • Hiring companies reveal their criteria for job candidates, including who they turn away
  • Smaller firms lay out their employee incentives against the deep pockets and endless perks offered by Apple, Google, and Facebook
  • Code academies and the questions they pose to developers and companies alike -- namely, their benefits over traditional educational institutions and how their graduates fare in the real world
The war for developer talent is on, but the field might be more level than you think. If you're a developer, find out what companies are looking for, what you can expect, and how to prepare. If you're hiring, see what your competitors are dangling in front of candidates and how you can make your case to developer talent. Get the lowdown on the battle for programmers in InfoWorld's downloadable PDF, "The care and feeding of a rockstar developer." Download it today!

 

9/25/2014 05:24:00 PM

Hortonworks reignites Spark to boost Hadoop

Hortonworks puts its development expertise into bolstering Spark, Hadoop's in-memory processing system, to craft a next-generation tool.

Hortonworks bills its version of Hadoop as a pure open source play, both in terms of what it distributes and how it invests resources back into Hadoop's core projects. In a blog post today, Hortonworks announced plans to plow its resources back into Spark, Hadoop's engine for streamed data, machine learning, and in-memory data processing.
In the post, Hortonworks says it has "outlined a set of initiatives to address some of the current challenges with the technology that will make it easier for users to consume as part of the completely open source Hortonworks Data Platform."
The work falls into two basic categories. First is deeper integration of Spark with YARN, the technology created to replace MapReduce so that Hadoop applications can run more efficiently in parallel. Spark runs on YARN, but in the current version of Spark this "leads to a less than ideal utilization of cluster resources," according to Hortonworks, "particularly when large datasets are involved."

Spark's big selling point is in-memory processing, so the planned work involves improving Spark's use of memory-handling features built into YARN. One such feature is node labels, which allows Spark applications to be tagged so that they can be automatically processed on nodes in the cluster where memory is abundant.
Spark provides in-memory and real-time processing for Hadoop. It works with the YARN framework, but its integration with YARN is rudimentary.
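To make the node-label idea concrete, here is a minimal sketch of how a Spark job might be steered toward memory-rich nodes. It's illustrative only: the label name is made up, and the nodeLabelExpression property shipped in Spark releases after the work described here, so treat this as a sketch of where the integration is headed rather than a shipping API.

```python
# Minimal sketch: steer Spark executors to YARN nodes labeled "high_memory".
# Assumes an admin has created the label on the cluster, e.g.:
#   yarn rmadmin -addToClusterNodeLabels high_memory
# The nodeLabelExpression property landed in Spark releases after this article.
from pyspark import SparkConf, SparkContext

conf = (
    SparkConf()
    .setAppName("label-aware-job")
    .setMaster("yarn-client")
    # Ask YARN to place executors only on the memory-rich, labeled nodes.
    .set("spark.yarn.executor.nodeLabelExpression", "high_memory")
    .set("spark.executor.memory", "8g")
)

sc = SparkContext(conf=conf)
print(sc.parallelize(range(1000)).sum())  # trivial job to exercise the cluster
sc.stop()
```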
The other major category of improvement covers general Hadoop issues: security, governance, and operations. Security on Hadoop has been catch-as-catch-can for much of the product's lifetime, but a new Apache project that entered incubation earlier this year -- Apache Argus -- addresses it in a consistent manner.
Argus did not start as a community initiative; it's the open-sourced version of a commercial product, XA Secure, that Hortonworks acquired and transformed into an Apache-hosted project. The idea, as Hortonworks explained earlier this year, is to provide a centralized way to define and enforce security policy across Hadoop and all its components. This includes access controls down to the folder and file level in HDFS, and to the table and column level in Hive and HBase. But don't expect automatic Argus integration -- this project has a long road ahead for Hortonworks and everyone else contributing to the Hadoop ecosystem.
Other improvements include better debugging facilities for apps that use Spark, and integration with a YARN feature called ATS (Application Timeline Server). ATS metrics can be used for further debugging and performance improvements of Spark apps.
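As a rough illustration of what ATS integration makes possible, the sketch below polls the timeline server's REST API for recent application entities. The host, port, and entity type are assumptions for illustration; only the general /ws/v1/timeline URL shape comes from the ATS REST API.

```python
# Sketch: pull recent entities for a Spark app from YARN's Application
# Timeline Server (ATS). Host and entity type are illustrative assumptions.
import json
import urllib.request

ATS = "http://timeline.example.com:8188"  # 8188 is the default ATS web port
ENTITY_TYPE = "SPARK_APPLICATION"         # assumed entity type for this sketch

with urllib.request.urlopen(f"{ATS}/ws/v1/timeline/{ENTITY_TYPE}?limit=5") as resp:
    entities = json.load(resp).get("entities", [])

for e in entities:
    # Each timeline entity carries an ID, a start time, and posted events.
    print(e.get("entity"), e.get("starttime"), len(e.get("events", [])))
```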
Spark is hardly the only in-memory processing option offered for Hadoop, although much of the competition is proprietary. Pivotal, for instance, has an in-memory analytics system, Gemfire XD, that was recently upgraded to a new version. It has some similarities with Spark, but it's aimed at those coming from a traditional database background, and it's a closed source offering. Cloudera has its own commercial distribution of Hadoop, albeit one that comes bundled with Spark rather than a closed source substitute.

 

9/25/2014 05:20:00 PM

Mobile and PC management: The tough but unstoppable union

One day, you'll manage all client devices from a central policy console, but it won't be a fast or easy journey.

You know that a trend has peaked when the establishment jumps on board. That's happening in the world of mobile management, pioneered years ago by niche companies such as Good Technology and Zenprise and startups like MobileIron and AirWatch. Now, establishment companies such as CA Technologies, Citrix Systems (which bought Zenprise), Dell, EMC's VMware (which bought AirWatch), IBM, and Microsoft are aggressively pushing their mobile management tools.

Just as the establishment is getting into mobile management (aka MDM), the field itself is poised for a shift away from mobile only. Tablets -- both the category-defining iPad and the "deconstructed laptops" promoted by Microsoft and other Windows device makers -- are at once like smartphones and like laptops. For some people, they replace laptops; for others, they supplement them. In any event, the lines between computers and mobile devices are blurring.

Even where there are clear divisions, users are working with multiple devices. Suddenly, any separation on the management side becomes hard to maintain in reality -- password, access, and other policies overlap hugely, even if the tools don't.

That's why MDM is shifting away from mobile to encompass anything and everything a user might access: smartphones, tablets, computers, even cloud desktop services. Some are personally owned, some are work-owned, most are mixed-use in practice. They cover a range of operating systems: multiple versions of Windows, OS X, iOS, and Android for sure, perhaps Linux, Windows Phone, Chrome OS, and BlackBerry OS as well.

But getting to that state of universal client management is not easy. Fundamental technology differences exist on these clients, affecting what can be secured and managed and how it can be secured and managed. Still, vendors are moving in that direction because, they say, large businesses have decided that in the not-too-distant future they would like to end the separate PC and mobile silos and manage devices collectively.

 

9/25/2014 05:14:00 PM

Forget gadgets for the rich -- real innovation is happening in manufacturing

While the digerati drool over toys and services for the rich, innovation in manufacturing is creating jobs and wealth for the companies that get it.

OK, Google -- Trimble will see your driverless car and raise you a driverless tractor. Smart watches? How about a smart HVAC system that diagnoses itself and sends the data to the cloud?
At a time when Silicon Valley innovation seems like it’s all about making life easier for overprivileged hipsters -- Alfred, I’m talking about you -- tech companies are quietly developing lucrative new lines of business focused on manufacturing.

Customers of Teradata are using predictive analytics and data stores to manage supply chains and enable just-in-time manufacturing of everything from golf clubs to automobiles. Meanwhile, Verizon attaches communications devices to sensors in farmers’ fields that monitor temperature, moisture, and more, sending that data to the cloud.
There is, of course, a long history of technology penetrating factories and farms. That’s not news. What’s different now is the ability to use the cloud, advanced positioning systems, 4G LTE networks, big data applications, and even virtual reality to make manufacturing far more efficient.
I got a taste of that real-world innovation this week at an event hosted by Verizon and (believe it or not) the National Association of Manufacturers, dedicated to the proposition that technology can make American manufacturing more competitive, create real jobs and real goods, and make money for the companies that get there first.

The tractor of the future does more than drive itself

Trimble is a company you probably don't know. It started using positioning technology long before the first GPS satellites went into orbit. Now it’s a $2.2 billion business with customers in agriculture, heavy construction, and transportation.
Trimble’s positioning technology is so precise, it can direct a tractor to plant seeds within a few centimeters of where the farmer wants them, says Doug Brent, a vice president at the company. That self-driving tractor doesn’t cruise around with no one in the cab -- there’s always a worker on board. And you won't see it on a small, family farm.
Finding an optimal route through the huge fields favored by agribusiness saves time and fuel, while also minimizing the amount of soil compacted by the huge machines, Brent says.
Grading a road can be even more complex because it involves a third dimension. Trimble’s technology not only controls the path of the grader, it allows the machine to cut exactly as deeply as it needs to in the first part of the operation, then compress the paving material to precisely the right height.

The air conditioner that knows when it needs servicing

Selling manufactured goods is often a one-shot deal: Sell them and say good-bye. But Daikin, a company that makes huge heating and air conditioning units, has found a way to make its equipment more efficient and add an ongoing revenue stream. Sensors built into the systems monitor vibration, operating temperatures, and so on. They then send that data to the cloud, giving Daikin technicians a heads-up on when the units may need repair.

Armed with that information, Daikin techs schedule maintenance calls when needed -- and skip calls that are unnecessary. The result: more revenue for Daikin, along with greater energy efficiency and fewer headaches for clients.
Similar -- but much more sensitive -- monitoring equipment is being used in a pilot project by Intel, says Shahram Mehraban, who heads the chipmaker's energy and industrial verticals group. Monitors on ultra-expensive fabricating tools are used to head off problems that could affect yields in the company’s fabricating plants, a process known as predictive maintenance.

Big data and bespoke golf clubs

If you don’t play golf, you probably haven’t heard of Ping, a maker of high-end golf clubs. Its clubs are made to order, and the company promises delivery within two days. Keeping that promise requires a sophisticated supply chain. Point-of-sale devices collect information at pro shops and other retail spots, and that data is then stored in SQL databases sold by Teradata.
Combining that data with predictive analytics software from SAS Institute gives Ping a good idea of what types of clubs golfers are likely to order in the near future. Thus, it can keep the appropriate materials close at hand, says Andrew Coleman, a Teradata area director.
Chipmaker Freescale uses Teradata software and databases to track semiconductor manufacturing, so it can trace defects back to a particular machine and operator.
Manufacturing, says Coleman, now constitutes 15 to 20 percent of Teradata’s business -- more than double what it was five years ago.
Verizon, meanwhile, has a 200-member team solely dedicated to its manufacturing business. Customers typically use Verizon devices to collect data and send it to Verizon’s cloud, where it’s stored and analyzed. One example: a home health care device that collects patient data, then transmits information to doctors and insurers via a built-in radio.

Manufacturing is a multi-billion-dollar business and a key part of Verizon’s move away from its traditional reliance on landlines and other legacy businesses, says Scott Allen, a managing director in the company’s enterprise solutions group.

Hey, VCs: Focus on technology innovation that matters

Digital technology isn’t going to revitalize the Rust Belt and bring back millions of long-lost jobs overnight. But aided by Silicon Valley, the U.S. manufacturing sector is becoming more competitive: A 2013 survey of CEOs by consulting giant Deloitte showed that significant numbers plan to build their next factory in the United States, says Allen.
Alfred, by the way, was the first-place winner at the TechCrunch Disrupt conference. It’s a service designed to make it easier to hire minions who will take care of chores (say, laundry) you’re far too busy and important to handle yourself.
The judges included a bevy of Silicon Valley venture capitalists who get excited about silly stuff like that these days. Me, I get more excited about self-driving tractors.

Wednesday, September 24, 2014

9/24/2014 12:08:00 PM

How to survive the data explosion

IT organizations everywhere are racing to deal with the onslaught of petabytes. Here's how to meet the challenge.

IDC estimates that enterprise data doubles every 18 months. That's an astounding statistic, but somewhat difficult to wrap your head around. A simple analogy may help.
Let's say you're an avid movie buff, and when the American Film Institute's top 100 DVD collection came out in November 1998, you were one of the first to buy it. A collection of 100 DVDs is large enough to be impressive, but small enough to browse easily and find something you want to watch. Weighing in at around 28 pounds and taking up about four feet of space on a bookcase, it fits in even the most cramped NYC loft. Best of all, "Apocalypse Now" is only a quick 30-second visual search away from your DVD player.
Now, let's apply IDC's enterprise data growth stat to your primo collection. After doubling every 18 months, in November 2015, your collection would have grown to more than 200,000 DVDs weighing over 20 tons and taking up nearly two miles of shelf space. Unless you kept the DVDs scrupulously alphabetized, finding the one you want could take hours. Your collection would have grown to such a massive size, it would almost be useless. It would be a ball and chain dragging behind you until you give up and get rid of most of it.
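The math is easy to check. Here's the article's back-of-the-envelope calculation reproduced in a few lines of Python:

```python
# 100 DVDs in November 1998, doubling every 18 months, checked in November 2015.
months = 17 * 12                      # Nov 1998 -> Nov 2015
doublings = months / 18               # about 11.3 doublings
growth = 2 ** doublings               # about 2,580x

print(f"{100 * growth:,.0f} DVDs")                 # ~258,000: "more than 200,000"
print(f"{28 * growth / 2000:,.0f} tons")           # ~36 tons: "over 20 tons"
print(f"{4 * growth / 5280:,.2f} miles of shelf")  # ~1.95 miles: "nearly two miles"
```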
That's precisely what's happening with our data. Personal, corporate, governmental — it doesn't matter. We're keeping and maintaining way more data than we can possibly ever use. The fact that an 18GB disk available in 1998 is roughly the same size, weight, and cost as a 4TB disk today only obscures the problem and makes us lazy about policing our data growth.
We may be able to store all that data, but when we lose the ability to manage and exploit it effectively, its value decreases. As a result, many businesses are spending more and more time and capital to store data that's worth less and less to the business. Data growth is unavoidable. But it must be accompanied by data management policies that ensure the data created and retained is of real and lasting value.

 

9/24/2014 12:03:00 PM

With Ellison's exit, Oracle can finally do the cloud right

Oracle's cloud strategy faces one fewer hurdle as Larry Ellison steps down as CEO.

 

Larry Ellison is stepping down as CEO of Oracle. The news rocked the industry last week. When I heard, I was sitting in a meeting and thought to myself: This is where I was when Larry Ellison resigned.

Don’t cry for Ellison; he’s stepping up and over to become executive chairman and CTO. The new co-CEOs, Safra Catz and Mark Hurd, had been co-presidents reporting to Ellison. Now they report directly to the Oracle board, which Ellison oversees. That’s just weird.
Why did Ellison step down? Most observers agree it was time for a change at Oracle, after 37 years with Ellison as CEO. Perhaps the board agreed. It’s no mystery that Oracle has had difficulties with emerging technology, and the company appeared to be stuck in the past. Cloud computing seems to be one of those emerging technologies Oracle has struggled to understand and accept -- Ellison long pooh-poohed the notion, and his company's eventual "adoption" seemed more lip service than true embrace.
I’ve not been a fan of Oracle’s movement into the cloud. For the most part, Oracle doesn’t get it, beyond some SaaS offerings. By now, I would have expected Oracle to lead database as a service, PaaS, and IaaS, but most of its efforts to date seem phoned in.
Look at where Oracle is today and where it needs to go. Maybe Oracle's board believed it was time for new leadership. But the new leadership is the old leadership. We’ll see how that goes.
Oracle needs to take hard turns to get cloud computing right. It's made a few good moves, but not enough. Adapting to the cloud seems to be a hard slog for Oracle, so it's not even keeping up with the rest of the industry. Moving to the cloud requires not only a technical change but a systemic culture change as well -- and that's where I suspect Oracle is struggling.
Oracle is about big deals, big agreements, and big software. The cloud is about selling software as a utility, which Oracle may have a hard time doing. It’s the same sort of business model shift that moved people from regularly scheduled meetings with Big Blue’s sales reps to clicks on a credit card in the App Store.
Oracle may have seen the writing on the wall, and Ellison was more in the way than helpful, so perhaps he’s taking a more tactical role now. There's no shame in that: After 37 years, nobody can call Ellison's tenure at Oracle a bad run.

 

9/24/2014 11:59:00 AM

NPM 2.0 courts enterprises with Node.js module management

The Node.js default package management system hits version 2.0 with performance improvements and enterprise-friendly features.


NPM, the standard package manager used with Node.js, enjoyed a 2.0 release this past week. The new version is outfitted with fresh features and fixes, and its release process has been revised to satisfy both those who want to use Node.js in a production environment and those who want to engage in a little Node derring-do.

According to the npmjs.org blog, the single biggest addition to NPM is a feature called scoped packages. The idea, courtesy of Node.js enterprise users, is to make management of private Node.js modules as easy as managing modules from the public NPM registry. Modules can then be "scoped" to a specific organization -- named along the lines of @myorg/http-client rather than plain http-client -- so that private code in enterprise settings doesn't require extra management and won't clash with public versions of modules. The blog post noted, "[Scoped modules will] also play a major role when private modules come to the public NPM registry."
Many of the other improvements in version 2.0 focus on making NPM more reliable, particularly regarding the concurrency and race-condition issues that have appeared over time in NPM. (Node's single-threaded architecture doesn't prevent race conditions entirely, as Chris Baus has explained.) Another change, local path support for packages, allows a dependency to point to a local or relative path on disk rather than to the registry, "which is helpful for testing."
The NPM project also recently switched to a new release process in which two distinct versions of NPM debut simultaneously. The version tagged npm@latest is for production use; the version tagged npm@next is the bleeding-edge edition for those interested in providing test feedback or experimenting with features.
NPM, Inc., the company that sponsors development of NPM, was founded earlier this year by former Node.js maintainer Isaac Schlueter and was originally hatched to deliver more enterprise-specific support for Node through further development work on NPM. So far, that's resulted in NPM Enterprise (npmE), a workflow and deployment solution for Node.js outfitted with compliance, security, and management features. According to a post describing NPM Enterprise's road map, the product still lacks a number of features, such as a formal administration UI or a native backup system, although plans have been made to deliver those features at some point.

 

9/24/2014 11:48:00 AM

Run anywhere again: Java hooks up with Docker

Zulu JVM is the first Docker offering to be officially certified as Java-compliant.



Azul Systems is offering its certified Zulu JVM via Docker, enabling developers to more quickly package and distribute applications.
Applications can more easily be redeployed on servers or made available on public clouds using Docker. "You configure it once, and once it's configured, it's very easy to [roll it out] in multiple places," Azul President/CEO Scott Sellers said in an interview. Although others have offered Java via Docker, Azul says its open source Zulu JVM is the first Docker-based Java offering to be officially certified as Java-compliant and fully supported. "This is really needed in order for enterprises to deploy Java on Docker in real production environments," Sellers said.
Azul's move acknowledges Docker's growing acceptance, according to analyst Jay Lyman of 451 Research. "It is another sign of Docker's progression and appeal among more mainstream and larger enterprises, where Java is often more prevalent and significant," Lyman said in an email. "As testament to the rapidly forming Docker ecosystem and market, there are usually many tools, vendors, and options for development, management, and orchestration. In the case of enterprise Java, there are different software tools and vendors associated with Docker, including Azul and Red Hat." 
Docker is the hot technology of the day. A Datastax official recently lauded the partnering of Docker with Java, and Google has made its new Dart language platform available via Docker images.
Although Azul is going public with its Docker JVM today, the technology has been available for a couple of weeks. It can be accessed on the Docker registry by searching on "Zulu," "OpenJDK," or "Azul." Azul's JVM supports Java versions 6, 7, and 8 via Docker, with support for Java 9 planned when that version is available. Zulu on Docker has passed official OpenJDK TCK (Technology Compatibility Kit) testing, Azul said, and is based on the OpenJDK open source implementation of Java.
9/24/2014 11:12:00 AM

Create your own 'dirty dozen' threat list

Which security events should you worry about most? Everyone has different vulnerabilities, so here's how to prioritize.


Back in the late '80s, I helped maintain the infamous Dirty Dozen malware list, which was created by Tom Neff and later updated by Eric Newhouse. The Dirty Dozen list originated because (cue the nostalgia) we had only a handful or two of malware programs to worry about. Neff's original list contained mostly Trojans, although early Apple viruses made it as well.

The number of malware programs quickly became multiple dozens, then exceeded 100. Neff and Newhouse gave up on maintaining the list because their hobby was taking up too much of their free time.
Today, I’m a big believer in each organization maintaining its own dirty dozen list, but instead of listing malware programs to be worried about, it should list the top dozen security events to look out for.
You say you're already on the lookout? Probably so: Most enterprises that use event log monitoring to keep an eye on their networks end up doing too much monitoring. For years, the Verizon Data Breach Report has told readers that most data breaches could have been caught by monitoring tools, if only the owners had been looking. It’s a problem of too much noise and not enough thoughtful planning.
The average enterprise generates literally millions to billions of events and collects them in a centralized repository. Companies even brag about how many petabytes of storage they've purchased to hold all their event log results. Although I’m smiling on the outside, I’m rolling my eyes on the inside. Tell me that you’re collecting billions of events, and I know you're not doing a good job detecting evidence of malware or malicious hacker activity.
Talk about the proverbial needle in a haystack. Most companies would be far better off defining a handful or two of events that clearly indicate malicious behavior and throwing away the rest. Actually, the best strategy is to let each endpoint device generate as many events as it likes -- but forward and alert on only a dozen nasty ones. That way, if you need the historic detail, you can go to the endpoint device and get all the minutiae you need. But don’t forward millions or billions of events to a database and hope you can create order from disorder (though some regulatory guidelines encourage this).

A better monitoring strategy

A far better strategy is to get the right people into a room for a day and decide on the dozen or so events that the company should be monitoring and alerting on. Instead of trying to define every event that might be malicious, focus on defining what would always indicate maliciousness -- like 50 million guesses against the domain admin password.
Malicious event log messages should be broken down into three major categories:
  • Single events that indicate maliciousness
  • Individual endpoint aggregate events that indicate maliciousness
  • Enterprise-wide aggregate events that indicate maliciousness
Start by defining which events -- even in a single instance -- would indicate maliciousness. A good example is someone trying to log on to a honeypot system. Once a honeypot system is fine-tuned to ignore all the normal logon events and touches that happen to any system on a network, it should forward any previously undocumented “touches” from any system. It’s a fake system, so nothing should be touching it, and if an item makes contact, it needs to be investigated.
Another one-time event example could be a member of the domain admins group logging onto a regular workstation. This should never happen. Domain admins should only be logging on to administrative workstations, domain controllers, or a limited set of other pre-authorized computers. You can tell computers to send alert messages when a highly privileged group member logs onto an unauthorized computer. It’s a great way to catch APT attackers.
Another one of my favorite single events involves installing an application control program, in audit mode only, on servers. Take a snapshot of what is supposed to be running on your server, being sure to create authorized installer rule exceptions, then alert on the unexpected. Servers shouldn’t be getting a whole lot of unexpected executables, and if they do, it needs to be investigated.
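Detection rules for such single events can be almost trivially simple. Here's a minimal sketch of the first two examples above; every name, field, and address in it is a hypothetical placeholder:

```python
# Sketch: "single events that indicate maliciousness." Any touch on a honeypot,
# or a domain-admin logon outside the pre-authorized set, triggers an alert.
HONEYPOTS = {"10.0.9.13", "10.0.9.14"}         # fake systems: nothing should touch them
DOMAIN_ADMINS = {"alice.admin", "bob.admin"}
ADMIN_WORKSTATIONS = {"PAW-01", "PAW-02"}      # pre-authorized admin machines

def is_malicious(event):
    if event.get("dest_ip") in HONEYPOTS:
        return True                            # rule 1: someone touched a honeypot
    if (event.get("type") == "logon"
            and event.get("user") in DOMAIN_ADMINS
            and event.get("host") not in ADMIN_WORKSTATIONS):
        return True                            # rule 2: admin on a regular workstation
    return False

print(is_malicious({"type": "logon", "user": "alice.admin", "host": "RECEPTION-PC"}))  # True
```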
Next, define what events indicate maliciousness when they happen to a single object or endpoint. A good example would be a higher than normal number of failed logons against a particular domain admin account. Every week, most users are responsible for one or more failed logons (unfortunately, our passwords and PINs are getting longer and more complex). Every user, or computer, has a certain “rhythm” of bad logons. Maybe it’s two a day, 10 a day, or 50 a week. Regardless, the detection event should kick off when the number of failed logons exceeds the baseline normal by a great amount.
Say that a normal user has five bad logons a day, on average; the threshold alert event should be something like 50 or 100. Don’t set the detection threshold so small that you end up with lots of false positives. Remember, we are defining events that are truly malicious (or at the very least require investigation to rule out maliciousness). Another good example is higher than normal failed logons on perimeter routers.
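In code, that per-account logic is nothing more than a baseline plus a generous multiplier. A sketch, with all numbers illustrative:

```python
# Sketch: alert only when failed logons far exceed an account's normal rhythm.
from statistics import mean

def failed_logon_alert(daily_history, today, multiplier=10):
    baseline = mean(daily_history)        # e.g. ~5 bad logons/day is normal
    return today > baseline * multiplier  # alert around 50+, not at 6

print(failed_logon_alert([4, 6, 5, 3, 7], today=6))    # False: normal rhythm
print(failed_logon_alert([4, 6, 5, 3, 7], today=120))  # True: investigate
```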

Lastly, you must define which events collected across the enterprise indicate maliciousness. For this, you’ll definitely need to collect these events in a centralized repository (this shouldn’t take petabytes of storage for any enterprise). A good example is the number of failed logons, in aggregate, across the enterprise. Again, monitor and establish a normal baseline (for example, 5,000 bad logons a day) and set an alert for when that threshold is exceeded by a large amount (100,000 bad logons). Another good example is a higher than normal number of malware detections across the enterprise. Inventory all the sources of event log messages -- firewalls, computer endpoints, antimalware logs, application logs, and so on -- then decide which events should be monitored and alerted on.
Don't try to detect everything that could be malicious. Break the habit! Define 12 events that are guaranteed to mean maliciousness and call it a day. If you can’t narrow it down to a dozen, try two or three dozen, max. Anything beyond that will probably result in more false positives and noise than good detection.
When in doubt, go smaller. I guarantee that you’ll do a far better job at detecting badness than most companies.

 

9/24/2014 11:08:00 AM

iOS 8's iCloud Drive reveals the dark side of empowered users

Apple's iCloud Drive deployment was sure to mess up people's access to documents -- and it did.


When editors at a technology website can't figure out how to set up a basic feature, you know something is wrong. I'm a big proponent of users taking on more technology responsibility, not being subject to the whims of the IT priesthood. But when you become shadow IT or a consumerization user, you're taking on a higher burden.

Yet many people fail to do that, and their technology vendors often fail to help them. Perfect example: Apple's iOS 8 upgrade.
Apple's reputation is all about ease of use and empowered users. But when you upgrade to iOS 8, chances are very high you'll mess up your iCloud settings and end up with a major compatibility break. That happened to the editors of Macworld UK, who lost their ability to collaborate using iWork documents. It also happened to my far less technical sister-in-law and her husband, who both lost their iPads' iWork documents.
In both cases, the problem was user error compounded by a lack of sufficient guardrails from the vendor. This is not just an iOS 8 or Apple issue -- if it happens in that environment, you know it'll happen everywhere. Both users and vendors need to step up.
The specific issue is that Apple is changing its iCloud document system at a fundamental level; what had been called iCloud Documents is now called iCloud Drive. iCloud Drive overcomes iCloud Documents' limitation of showing only documents for the app currently open, and it works more like Box or Dropbox in exposing all cloud documents to all apps. Apps still get dedicated folders for their documents, for backward compatibility, but the architecture change is so great that if you upgrade to iCloud Drive, you can no longer sync or share with devices using iCloud Documents.
There's a warning about this when you update iOS, but it's confusingly written, and despite the warning, the upgrade process really wants you to enable iCloud Drive. Here's what my sister-in-law emailed me after all her iWork documents disappeared from her iPad:
A pop-up came up when I tried to use Pages this morning. It said something like upgrade to IOS 8, OK. It would not allow me to use Pages till I said OK, and I was afraid to say OK for fear of losing files or other bad consequences. So I waited for Bob to wake up and look at it. He said OK, then answered another question he can't remember, and that's when the damage was done, he thinks.
Some people have reported that the documents eventually come back if you wait long enough after updating the iWork suite to the iOS 8-savvy version. So far, that hasn't happened for my sister-in-law or her husband.
The underlying issue is that Apple has only partially implemented iCloud Drive -- it's available for iOS 8 and for Windows 7 and later, but not for OS X (that will come with OS X Yosemite's release next month). Lots of people who upgrade to iOS 8 won't know that, so they'll be stuck, as you can't reverse the upgrade. OS X users are especially hard hit, but Windows users must also have the right update, which isn't always automatically installed or easy to find.

In my in-laws' case, it's likely their iWork documents never were backed up to iCloud to be recovered -- not to their computers, either. One of the promises of iOS is that you can use your device as a stand-alone system, no ties to a computer via iTunes necessary. My relatives use it that way, without iTunes backups. Their farm is also in an area with poor broadband coverage, so it can be weeks before local documents get synced to iCloud, if iCloud sync is even on (they're not sure if it was before the upgrade). Translation: They're mainly local files.
You can blame all this on stupid users, but the real issue is that such upgrades are significant changes that are presented as easy activities -- click the Update button when prompted and you get a "new" iPad, or whatever. As I said, this fooled the editors at Macworld UK, who aren't as naive and trusting as my in-laws.
The truth is that there's a design flaw in Apple's iCloud Drive deployment exacerbated by the trusting self-assurance the consumerization phenomenon advocates and engenders.
The answer is not to revert to an "IT knows best" strategy. IT can be as blind to such changes as anyone, and the technology it picks is often backward. (Don't get me started on how stupid Office 365 is on a mobile device or a Mac, for example. You can't even update your devices list from a mobile device, much less manage any account settings -- not that Windows-hugging IT even knows that.)
I'm not sure what the answer is beyond everyone taking more responsibility for their part of the context. Ironically, the consumerization trend works against that self-responsibility. Here's an analogy: In my 20s, I knew how to change my oil and spark plugs, and it was a fairly simple process. Today, car engines are highly complex and overstuffed with technology, so fewer people can change these items themselves even if they want to. You open the hood and realize it's become rocket science.
Computer technology is making the electronics we use fantastically complicated on the inside but seemingly simple on the outside. We think the equivalent of changing the oil or spark plugs is no big deal -- even OS upgrades are treated as simple, over-the-air activities.
But such changes are a big deal, so we need to learn more before making them -- or hire a professional. Most people today do neither. And we'll all suffer for it.

Tuesday, September 23, 2014

9/23/2014 03:10:00 PM

Getting Inspired: 5 Ideas to Help Refresh Your Ad Copy

Refreshing ad copy is one of those optimization tasks that I have a very bad habit of putting off. For me, writing ad copy requires a certain Zen -- a deep concentration and focused frame of mind. And let's be honest -- when you're trying to manage PPC campaigns every day, it's tough to tap into that creative mojo when you need to.
And sometimes you struggle to come up with different ways to say the same thing, or a new spin on how to present your product. All within the headline and two 35-character description lines of text. It can be challenging.
But it's important not to use the same old ad copy. Writing great PPC copy means keeping your ad copy fresh. Even if your ads have a good CTR, you should always look for ways to make them better.
So how do you come up with new ideas for ad copy? Below are 5 ways to find creative and different ideas:
1) Pay Attention to Commercials
We often find commercials insanely annoying -- but they can be a great way to see how advertisers are messaging their products. Next time you're watching TV, pay closer attention to the commercials and see if you can glean some new phrases or tagline ideas.
2) Check Out Online Advertising Sites
Advertising industry sites can provide interesting insights into the latest ads and creative. Not only do you get updates on the latest advertising news, but these sites also showcase notable work.
Below is an example of an ad from Adroit Digital that not only creatively uses the company's name but also has great taglines:

Here are some advertising sites that can help inspire you:
www.mediapost.com
www.adage.com
www.adweek.com
3) Look at Competitors
Hey -- it doesn't hurt to see what your competitors are doing! A search on "wedding invitations" shows that while incorporating offers such as free shipping and discounts is important, sometimes adding a word such as "stylish" or a phrase like "make it yours" can resonate more with a prospective customer.

4) Read...
Magazines are a good way to get creative ideas -- check out this example from Dunkin Donuts:
While the association of a smoothie and a scary snack isn't one most of us would make, it does result in a unique message.
5) Don't forget site links and callouts
When updating your ad copy, don't forget to revisit your site links and callouts. Changing your ad copy may mean your site links and callouts need to change, too. And incorporating callouts into your message can help create a more relevant and compelling ad.
Are you feeling more energized to write new ad copy?
9/23/2014 03:07:00 PM

Centralizing Location Data: 3 Steps to Local SEO Success

After five years of growth in the north Chicago suburb of Evanston, my company outgrew its space and recently relocated to Chicago's Loop. I was part of the relocation effort, and the number of decisions that had to be made to move SIM Partners' headquarters was staggering and unexpected. We had to find the space, build it out, furnish it, wire it, build out the IT infrastructure, coordinate the move itself, and so on. Thankfully, we entered our new headquarters ahead of schedule and, while some finishing touches remain, we couldn't be happier with the space.


Unfortunately, as it does for so many businesses, updating our online location information was relegated toward the bottom of the move's priority list and we had to put off addressing this critical task until two weeks had passed in our new space. As a technology company entirely focused on creating tools that maximize local opportunities online, we understand the importance of maintaining accurate listings and the ins and outs of listing management well. Despite moving only one of our locations, we couldn't make it a top priority. Thankfully, we don't receive much walk-up traffic in our office since we cater to marketers in national-local businesses.
But what about a small business owner or a multi-location brand with hundreds of locations or more to support? If we have trouble prioritizing this critical marketing activity, how can businesses that aren't focused on local online data be expected to do it? Making these updates in a timely fashion is critical: customers have to be able to find a business before they can visit it and give it their business.
Automation and process can help!
Brand marketers can leverage automation and process to ensure even thousands of locations are kept up to date and visible to potential customers. Whether businesses try to go it alone or license technology to make the task more manageable, the end goals should be the same:
  • Standardize business data
  • Disseminate business data consistently; don't just fix existing information
  • Lay the groundwork for people to successfully find the business when searching, regardless of device, location or time/day
National-local businesses that have locations opening, closing or moving often find these issues exacerbated by well-meaning franchise owners or location managers either trying to improve their location's data or ignoring that data completely. Either scenario can lead to data management and distribution that is inconsistent with what the local ecosystem wants to see, and that means results falter.
Digital marketing pros supporting national-local businesses should consider these steps:

1. Determine where your local business data is currently hosted and take inventory.

Critical data for each and every location includes business name, address, phone number(s), webpage/site URL, hours of operation, categories, products/services offered and areas served. Additional elements worth sharing include staff details, current offers, supported charities, organization memberships and more. Once you locate the data, be sure to keep track of it. Oftentimes, this data exists in multiple locations; marketers who notice any of these elements missing in one system should keep digging for it in other locations. Our clients are often surprised at how much fantastic local data they find but never knew existed.

2. Warehouse the data and put processes in place so this hub is constantly updated.

Warehouse the data in a centralized location and put processes in place so this hub is constantly updated. There are multiple ways to do this, but third party tools can be a good solution if they easily integrate with your internal systems via loaders and APIs. The more locations to manage, the more technology can help automate the process and uphold brand standards.

3. Incorporate tools for effective data distribution.

Incorporate tools into this local data warehouse to push updated information to as many places as possible in a way that makes sense to the ecosystem. Minimally, marketers should focus on three important sets of web properties to keep updated: their own websites; data aggregators, including Foursquare; and Google My Business. Ensuring data consistency in all of these places will lay the groundwork to enable Google to make sense of what is going on in the local ecosystem and generally results in a strong presence for local businesses on the local and local-organic SERPs.
Listing management poses challenges for businesses of all types and sizes. It can be particularly difficult for small- to medium-sized businesses and harder yet for national-local brands, but with a centralized warehouse of local information, automation and process can help. By easing some of the burden to ensure consistency and regular distribution of data, marketers can execute effective local SEO efforts while the competition spins its wheels.
9/23/2014 03:00:00 PM

How to Use Keyword Research to Find New Landing Page Testing Ideas

Conversion rate optimization is all about finding the right elements on a page to test. Should you test a new value proposition, image, headline or redesign a page completely?
All of these things are worth looking into when considering what to try, but finding the right elements to A/B test can be very difficult.
Today, I'm going to introduce a shortcut for coming up with winning test ideas. Instead of staring at your page over and over again to come up with one or two new ideas, you can use this shortcut to generate a plethora of ideas in a short amount of time.

Introducing Competitive Keyword Research

Competitive keyword research helps PPC advertisers find out which keywords their competitors are bidding on. This can be done using tools like iSpionage, TheSearchMonitor or Adgooroo. Not only do the tools make you feel like the James Bond of search advertising, but you can then use the competitive data to find profitable keywords you aren't bidding on and to identify new ad groups you should create.
It's possible your competitors will send you down a rabbit hole of unprofitable keywords if they don't know what they're doing, but it's even more possible that you'll uncover keyword opportunities you haven't thought of yet.
Think about it this way: When you do keyword research with the AdWords Keyword Planner, you're limited to the number of ad group ideas that you come up with on your own and that AdWords suggests. Oftentimes there are keywords you aren't bidding on that have high volume and low cost per click. You'll catch these types of terms by doing competitive keyword research and learning from what the top advertisers are doing in your industry.
Being able to start with a keyword list that's already working for someone else will shorten your time to profitability. But not only can you use it to find new keywords, you can also use it to spy on your competitors' landing pages in order to come up with testing ideas. Here's how.

How to Spy on Your Competitors' Landing Pages

To begin, you need to choose a competitor to spy on. We're going to use SalesForce for our example and enter their domain into iSpionage to get started.
Conducting a search like this provides a lot of useful information. You'll learn approximately how much SalesForce is spending per month on PPC ads, their top keywords, their top competitors and their top ads, all of which is very useful.
But, as we mentioned before, the most helpful information for CRO is which landing pages your competitors direct their traffic to, which is something we can find out by clicking on the destination URL for an ad.
After clicking, you're taken to the exact landing page SalesForce uses for each ad in its account. When we do that for the two ads listed above, we're taken to the following page for the keyword "partner portal."

We are taken to this page below for the keyword: "sales team tracking software."
So what can we learn from these pages?
  • Salesforce uses a very simple design that places the emphasis on the demo sign-up form.
  • They offer access to all of their demos through a single sign up.
  • The offer on the landing page matches the keyword being bid on. Instead of directing all their ads to the same landing page, they direct visitors to a page with copy that matches what people are searching for. They also reuse the same template, which saves design and development resources and lets them efficiently create landing pages that match the keywords without spending a ton of money or time.
  • They include stats to show how customers benefit from using their product.
  • Trust symbols are placed at the bottom of the page to make visitors feel more secure about giving their information.
  • A phone number is included at the bottom in case people want to pick up the phone and talk with a real human. Yes, some people still like to use the phone.
These are very useful bits of information and reveal that SalesForce employs a number of industry best practices with its landing pages. And as mentioned before, you can use this information to come up with testing ideas for your own site.
For example, maybe you aren't currently using landing pages because your boss hasn't bought into the idea or approved the budget to design and create them for your products or services. You can build a presentation with screenshots of your competitors' landing pages and explain, "We're getting killed -- they're using custom landing pages for different keyword groups. If we want to keep up, we need to do the same."
Those are some of the main things you can learn from ethically spying on competitors' landing pages without clicking on their ads and making them pay. (Although who doesn't want to drive up a competitor's AdWords bill? Ha.)
But -- wait, there's more.
When we do a keyword search for "CRM software," we see the following list of top ads.
[Screenshot: top ads for the keyword "CRM software"]
Salesforce has the top ad listed, but there are four more competitors: QuickBase from Intuit, Syspro, Zoho, and Oracle. Let's look at QuickBase's landing page to see what we can learn.
[Screenshot: Intuit QuickBase landing page]
Here are some things that stand out from the page:
  • They offer a 30-day free trial, compared to Salesforce's offer to view demos.
  • They include a link to learn more about QuickBase in case the information on the page isn't sufficient.
  • They show off the fact that 50 Fortune 100 companies use the product.
  • They also include trust symbols and mention the other award-winning products they've created: QuickBooks, Quicken, and TurboTax.
So what could you test from this page?
  • You could test a 30-day free trial in place of your current offer to see whether conversions go up.
  • You could add a "learn more" link to see whether it produces more conversions than offering only the information on the landing page itself.
  • You could add logos of your top customers to show off your clientele and gain credibility.
Do you see how this works? You can go page by page to learn the best practices of advertisers in your industry, or of advertisers in other industries known for effective campaigns.
The best part is that you learn from the tests that companies with big budgets have already run. It's possible that Salesforce and Intuit have A/B tested their pages 100 times to arrive at the versions they currently use. If you know an advertiser takes A/B testing seriously, you can piggyback on their tests, glean lessons and ideas, and then run your own tests to see if you can tighten your funnel and improve conversion rates.
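Before acting on any of these ideas, it helps to know whether a test result is real or just noise. The article doesn't cover the statistics, but here's a minimal sketch of a standard two-proportion z-test in Python (standard library only; the visitor and conversion counts are made up) for comparing a control landing page against a variant:

```python
from math import sqrt, erfc

def ab_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from control A's?"""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-tailed p-value
    return p_a, p_b, z, p_value

# Made-up numbers: control page vs. a "30-day free trial" variant
p_a, p_b, z, p = ab_test(120, 4800, 156, 4750)
print(f"control {p_a:.2%} vs. variant {p_b:.2%}  (z = {z:.2f}, p = {p:.3f})")
```

A p-value below 0.05 is the conventional threshold for calling the difference significant; anything higher means you should keep the test running before declaring a winner.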
In summary, you can use competitive keyword research to:
  • Find profitable keywords you're not currently bidding on
  • Improve ad copy by learning what the top advertisers have written
  • Come up with A/B testing ideas by studying your competitors' pages
So go ahead and do it. Conduct some keyword research, write down some notes, take some screenshots, and show your boss how much you've learned about the competition and how you can use the insights to improve your company's conversion rates. Any comments or other examples of how to generate landing page testing ideas are welcome.
9/23/2014 02:55:00 PM

Bing Ads Reveals Dynamic Sitelinks

Bing Ads has revealed a new feature, "dynamic sitelinks," which creates sitelink-style annotations for ads whose advertisers haven't specifically set up sitelink extensions.
[Screenshot: dynamic sitelinks in a Bing ad]
From Bing's blog post:
Dynamic Sitelinks is another way to help your potential customers evaluate what your web site has to offer prior to clicking through, which saves them time and provides you more relevant customer opportunities. However, dynamic sitelinks is an ‘annotation’, which means that Bing Ads dynamically creates the content for you from content already on your page.
Advertisers' ads may be candidates for dynamic sitelinks if they haven't set up sitelink extensions and their display URL has deep-link information available, which the Bing algorithm uses to surface the annotations.
The announcement says internal Microsoft data shows that dynamic sitelinks can increase click-through rates by up to 14 percent.
In recent coverage of SEW's sister event ClickZ Live, dynamic sitelinks came up in a PPC session, where speaker Diane Pease advised that advertisers "create their own sitelinks to give themselves more control of their message."
In July, Google AdWords unveiled a similar feature, and in an article for SEW, Larry Kim discusses that topic at length.
The annotations have already begun rolling out to all U.S. advertisers, according to the announcement, and the rollout will be complete "sometime before the holidays."

9/23/2014 11:21:00 AM

Alice is killing the trolls -- but expect patent lawyers to strike back

The wheels of justice spin slowly, but they seem finally to be running software patents out of town.


Open source software developers, rejoice: Alice Corp. v. CLS Bank is fast becoming a landmark decision for patent cases in the United States.
The Court of Appeals for the Federal Circuit, which handles all appeals for patent cases in the United States, has often been criticized for its handling of these cases -- Techdirt describes it as "the rogue patent court, captured by the patent bar." But following the Alice decision, the Court of Appeals seems to have changed.

In case after case, the Court of Appeals is using Alice to resolve patent appeals, and in each one so far it has found the software patents in question invalid. Some examples:
  • A huge case involving digital camera manufacturers, retailers, laptop manufacturers, and more was summarily dismissed. The patent troll involved, Digitech, lost use of the patent it was using to shake down multiple victims.
  • Another case, involving online bingo, was resolved via the Alice decision.
  • Google won an appeal over patents held by BuySafe on vendor reliability ratings.
As PatentlyO points out, the Alice effect is even reaching lower courts, saving the Court of Appeals from having to strike down patent findings on appeal. For example, Google successfully challenged patents wielded by would-be troll Walker Digital, having them declared invalid for covering unpatentable subject matter.
Alice, and the earlier Mayo decision it draws from, is turning out to be the considerable weapon against trolls I predicted it would be, both in 2013 when I commented on the Federal Circuit's struggle to reach a decision and again this spring during the Supreme Court's deliberations.
But what does the "patent industry" think of all this?
Broadly speaking, patent insiders see Alice as curtains for many existing software patents. Well-known patent advocate Gene Quinn says at IPWatchdog:
My immediate reaction was that this would be extremely bad for software patents. ... It is now clear that the Supreme Court's decision in Alice fundamentally changed the law and future of software patents, at least those already issued and applications already filed, which cannot be changed without adding new matter.
In a discussion with Quinn that's worth reading in full, lawyer and patent law scholar Professor Mark Lemley agrees:
I think Alice is a real sea change on the patentable subject matter issue. I've heard a lot of folks talk about how Alice doesn't really use the word "software," so it doesn't really change anything, but I honestly think that's wishful thinking. ... I don't think it's all software patents, but I guess what I would say is a majority of the software patents being litigated right now, I think, are invalid under Alice.
All the same, Techdirt's point about the need to reform the Court of Appeals (or, better, to move patent appeals to the regional circuit courts) rings true. Sooner or later, patent lawyers will work out how to draft patents that don't get struck down merely for adding "on a computer" or "on the Internet" to otherwise unpatentable ideas. That's what software patent consultant Bob Zeidman told Quinn:
I think they've opened the door for making software patents exactly dependent on the draftsmen's art because as you and I have seen over the years, every time there's a court ruling it just means that you have to word the patent claims differently.
Indeed, Lemley has posited that patent lawyers will return to the previously deprecated practice of explicit functional claiming as a route around the sea change. That would yield a legal landscape similar to the one that the current software giants grew from, according to Lemley: "We may be going back to the world of the 1980s; not only the patentable subject matter world but maybe also in claiming and means plus function claims."
Using functional claims significantly limits the power of a patent, according to Quinn, who calls the practice "garbage," so perhaps this is a compromise developers could live with. Whatever happens, it's clear that Alice is having a much larger impact on software patents than many thought it would. In fact, for the present at least, it's cleaning house. That's a welcome change.