OpenStack rules the open-source cloud. Which may simply mean it’s the tallest person in Lilliput.
With Amazon Web Services (AWS) paving the way for enterprises to move their workloads to the public cloud, including in-house apps, OpenStack’s reign as open cloud sovereign may be short (if not nasty and brutish). The open question is whether OpenStack is a “poor man’s vCloud” or whether it actually fills a long-term and growing need for big organizations.
OpenStack’s Billion-Dollar Promise
No one questions OpenStack’s community bona fides. For years it has attracted thousands of developers to the semi-annual OpenStack summits.
It’s not surprising, therefore, that OpenStack would poll really well in popularity contests. According to a new Zenoss survey, 69% of the roughly 400 respondents are using a cloud, and 43% of those are using an open-source cloud (e.g., OpenStack, CloudStack or Eucalyptus).
Among these open-source competitors, OpenStack stands out, with 69% choosing the community leader.
This, in turn, seems to be translating into real revenue.
For example, 451 Research predicts that the OpenStack technology market, which produced revenue of $883 million in 2014, could top $3.3 billion by 2018.
Most of this OpenStack revenue derives from service providers, which in practice means much of it comes from Rackspace itself. Rackspace projects its OpenStack-based public cloud business will hit a $1 billion run rate by early 2016, though based on current growth it’s unclear how it gets to that number.
For its part, Red Hat got into the OpenStack game in earnest in 2013, but has publicly said it wouldn’t make much OpenStack revenue in 2014. And it hasn’t. But that may change, as I argue below.
Regardless, even $3.3 billion in OpenStack revenue by 2018 simply means that OpenStack will remain a distant third behind AWS and Microsoft Azure (to say nothing of Google) for the foreseeable future.
There are good reasons for this.
Clouding The Cloud
After all, according to the Zenoss survey, the top three benefits expected from open source cloud deployments were lower cost of ownership (71.1%), agility (55.6%) and better uptime (46.7%). At least two of those (agility and uptime) are almost certainly delivered more consistently by AWS than by some in-house team fiddling with OpenStack knobs and gears.
One primary reason for shifting to public cloud services is to get away from cumbersome, IT-driven service provisioning. It’s not clear how much OpenStack changes this. As one person told me, OpenStack is “for IT folks that want to stay on-prem, but fool their execs that they are doing ‘Cloud’.”
Or as Andy Jassy, Amazon’s cloud chief, puts it:
If you look deep into what [private cloud vendors] are offering, you will see that it’s basically an internal data center that is virtualized and has some management tools. Organizations that have private cloud systems will have missed out on all the advantages and benefits of going into the cloud.
That’s hardly a recipe for long-term success, even if Dell’s Joseph Jacks correctly surmises that OpenStack “will be the de facto [infrastructure-as-a-service] fabric for self-service cloud consumers in enterprise IT for some time to come.”
Red Hat To The Rescue?
As such, it’s highly likely that many workloads will stay behind the corporate firewall for the foreseeable future. In such a world, OpenStack’s big proponents can expect to make a lot of money. Foremost among these will be Red Hat.
Red Hat, more than any of the other OpenStack vendors, has a long history of hardening open-source code and selling it to the enterprise.
This, perhaps more than anything else, is what OpenStack needs today. As Gartner analyst Lydia Leong has suggested, OpenStack desperately needs a “core” that is “small, rock-solid stable, and readily extensible.”
She goes on:
There’s much work to be done still, but things are grinding onwards in an encouraging fashion. The will to solve the common problems of installs, upgrades, and networking seems to have permeated the community sufficiently that these basic elements of usability and stability are getting into the core. The involvement of larger vendors has created a collective determination to do what it takes to make enterprise adoption of OpenStack possible, in due time.
In just a few years, Red Hat has gone from zero involvement to top contributor to OpenStack, putting it in a great position to ensure OpenStack gets the “rock-solid core” it requires.
Meanwhile, whether you think private clouds are fake or real, enterprises have been turning to OpenStack to build private clouds, as OpenStack survey data shows. Between November 2013 and November 2014, OpenStack saw production deployments jump considerably, moving from 32% to 46% of survey respondents.
Open source being open source, “production” doesn’t necessarily translate into “revenue” for OpenStack vendors. Even if it did, this increased adoption almost certainly won’t add up to the $5 billion in annual revenue that AWS reportedly already generates.
Still, “eking out” a few billion of revenue from companies too skittish to leave their data centers behind? That’s revenue that Red Hat will gladly take.
Lead photo by George Thomas
View full post on ReadWrite
It’s no secret that Amazon leads the public cloud computing race. The question is by how much.
A year ago Gartner analyst Lydia Leong pegged Amazon Web Services (AWS) at five times the utilized compute capacity of the next 14 largest cloud competitors combined. More recently Technology Business Research ran the numbers and figures AWS is 30 times larger than its next nearest competitor, Microsoft Azure, as measured by revenue.
Either way, the disparity is enough to motivate an Occupy Amazon crowd. The problem for detractors and competitors, though, is that Amazon doesn’t seem to be in the mood to misstep. The only thing that will cut into its lead is someone else catering to developers as well as AWS has, and that doesn’t look likely.
Public Cloud: Big And Getting Bigger
It’s becoming increasingly important to get out in front of AWS. The problem, as noted by Leong, is that the delta between AWS and everyone else is huge, however you measure it.
Such “scale” advantage isn’t really a matter of data-center build-out, she goes on to note, but of software. AWS has such an impressive array of developer-centric software infrastructure, which translates into developer services, that closing the gap will be brutally hard.
Even Leong’s report that more workloads are moving to the cloud—to the point that enterprises have started to shift entire data centers over to the public cloud—doesn’t seem likely to cheer up Amazon’s rivals.
Why? Because AWS benefits disproportionately, as network effects drive vendors to focus their cloud attentions on AWS. If you’re a vendor choosing where to host your new service, AWS will nearly always be the first choice. If you’re a student, AWS will be the first cloud you learn, and possibly the only one. And so on.
Early on, while most cloud vendors were fixated on IT, Amazon devoted itself to developers, and has become the default for most developers.
Competing With The Amazon Beast
Competitors have taken notice, and are actively trying to market against perceived AWS weaknesses.
From the private/hybrid cloud side, we have vendors trying to insinuate that it’s expensive to stick with the public cloud. But such calculations completely miss the point, as they focus on cost when really the public cloud is driven by convenience.
And from public cloud peers, we get much the same, with Google and Microsoft lobbing price reductions at AWS. They haven’t worked. Pulling up stakes on one platform to move to another is more than a matter of saving a few dollars. It’s a hassle, one that can only be justified by making the alternative cloud more convenient.
GigaOm’s Barb Darrow asked which vendor had a shot at displacing AWS, and got a broad array of responses. I can’t help but think that most of them amount to wishful thinking.
Price isn’t going to drive developers into the arms of another vendor. Convenience, however, just might. Of the different competitors to AWS, Microsoft may have the strongest “convenience” story, because it’s able to marry Windows datacenter workloads with Azure cloud resources.
That’s a strong story, and it seems to be resonating.
Microsoft actually can serve as a role model for would-be Amazon usurpers. When you strike at the Amazon king, you must kill him with developer convenience, not with price reductions or stories of better performance, security, etc. Convenience sells developers.
AWS took a dominant lead with a strong developer story, and Microsoft may well be closing that lead through a differentiated, developer-focused story of its own. Game on.
Lead photo of Amazon CEO Jeff Bezos by Steve Jurvetson
Columnist Larry Kim explains how a few PPC optimizations can make a big difference in your bottom line. It’s the little things that count.
The post 3 Small Paid Search Optimizations With Huge Impact appeared first on Search Engine Land.
Now, at the dawn of a new iPhone (and other gadgets), it’s the perfect time to take stock of where iDevice popularity stands. And where it seems to stand is in the past.
According to data from mobile analytics company Localytics, it seems that old Apple gadgets are proving more popular than newer ones.
The iPad 2 is still the most used Apple tablet, while last year’s flagship iPhone 5S lags behind its 2-year-old predecessor, the iPhone 5. But that didn’t stop Apple from removing the latter from the store, in an apparent cleansing ahead of the company’s “big reveal” of new devices Tuesday.
The “S” Stands For “Still Can’t Beat The 5”
After a year on the market, says Localytics, the iPhone 5S still hasn’t caught the previous model. More iPhone owners use an iPhone 5 than any other model, with a 27% share, edging out its successor by two percentage points.
Judging by launch-period sales numbers, you might have imagined a different outcome. They’ve increased with each successive model—the iPhone 4 blew past 1.7 million units back in 2010; the 4S topped that with over 4 million in 2011; the iPhone 5 nabbed more than 5 million two years ago; and the 5S beat all those figures in 2013, exceeding 9 million.
The iPhone 5 may be the most popular Apple smartphone now, but that’s going to change pretty quickly. People who bought that device on a two-year contract will be eligible for an upgrade this year, and if they buy another iPhone, they’ll probably go for the latest model.
Meanwhile, Apple pulled the iPhone 5 from its store. It stopped short of a thorough cleansing of all its old phones, though. Inexplicably the older iPhone 4S remains in stock. Localytics surmises it could be a ploy to attract low-end shoppers, but that seems redundant, considering the budget iPhone 5C is still alive and kicking. For now, anyway.
There’s reason to think 4-inch iPhones have a finite shelf life, whether Apple kills them all at once or not. We’ll find out for sure at the press event, but for now, it looks like the company won’t be developing any new ones.
Turns Out, Plenty Of Us Still Carry The iPad 2 Too
Apple’s tablet team must be scratching their heads over another intriguing detail.
Despite numerous releases since the iPad 2—including Retina versions, mini variations and a super lightweight model—that 2011 device is still the predominant Apple tablet, with 29% share of all iPads.
Apparently there’s no such thing as being long in the tooth when it comes to tablets. Certainly, people don’t upgrade them as often as they do phones. Apple also held onto the iPad 2 for a good long while, finally discontinuing it last March.
Apple Moves On From The Past
Critics may have taken aim at Apple’s previous refusal to put out an enormous “phablet”-style phone or fixated on its lagging iPad sales earlier this year, but CEO Tim Cook and his company are having the last laugh now.
Bloomberg notes that this time last year, the company’s stock was sagging amid concerns about Cook’s ability to innovate and move Apple forward without the lingering direction of departed co-founder Steve Jobs.
Fast forward a year, and the company’s stock is nearing a record high. Not even security concerns over leaked celebrity iPhone photos seem able to bring the company down. Apple stock hit $98.36 by the end of trading on Monday, an impressive increase of 38 percent from the year prior.
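As a back-of-the-envelope check on those two figures (derived from the article's own numbers, not an independent data point), the Monday close and the 38% gain imply a year-ago share price of roughly $71:

```python
# Sanity check of the stock figures quoted above.
# Both inputs come from the article; the year-ago price is derived.
monday_close = 98.36   # AAPL close on Monday, per the article
yoy_gain = 0.38        # "38 percent from the year prior"

implied_year_ago = monday_close / (1 + yoy_gain)
print(f"Implied price a year earlier: ${implied_year_ago:.2f}")
```

The derived figure looks low against 2013’s nominal share price because Apple’s 7-for-1 stock split in June 2014 sits between the two dates.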
As for concerns about Cook’s leadership, the CEO will likely answer those criticisms at the press event. He’s reportedly delivering two of the most radically different iPhones the company has ever produced, along with a new smartwatch that bears no fingerprints from the Jobs era.
It’s essentially a show of confidence. Cook will need it Tuesday, when Apple enters brand-new categories that could determine the company’s immediate health and future course for years to come.
Big Data challenges all of our assumptions about how data should be stored, processed and analyzed. But that doesn’t mean relational databases and other incumbent technologies are slouching toward obsolescence anytime soon.
That’s the view of Cloudera co-founder Mike Olson, who recently sat down with Bosch’s Dirk Slama to discuss the interplay between the Internet of Things and new data technologies like the distributed-processing framework Hadoop. Slama, who’s writing a book on the IoT boom, authors white papers and speaks regularly on the topic. As such, he was the perfect person to ask thoughtful questions of Olson and draw out some pretty insightful responses.
Thankfully, I got to listen in. Here are some of the highlights.
Big And Getting Bigger
While “Big Data” is often a misnomer—most enterprises struggle far more with kaleidoscope-esque data variety than mountainous data volumes—it’s absolutely the case that data volumes are increasing. Ninety percent of the world’s data was created in the last two years, according to IBM research.
Olson expects those volumes to keep climbing:

[W]e are only seeing the very early days of IoT data flows, and already those data flows are almost overwhelming. Take the amount of information streaming up the smart grid, from taking readings once a month to 10 times a minute: That’s 150,000x more observations we are now getting per meter per month. Those data volumes are guaranteed to accelerate. We are going to collect more data at finer grain, and we are going to do it from a lot more devices in the future.
As Olson hints in that last response, the machines are to blame. He argues that “[t]he emergence of machine generated data has forced us to rethink how we capture, store and process data, and building very large-scale, highly parallel compute farms is now absolutely common.”
That “rethinking” is increasingly being done by a new generation of developers. While today there are just 300,000 developers contributing to IoT, a recent report from VisionMobile projects a whopping 4.5 million developers by 2020, reflecting a 57% compound annual growth rate and a massive market opportunity.
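The 57% figure is consistent with the report's endpoints. A quick sketch of the arithmetic, assuming a 2014 baseline and simple annual compounding (the inputs are the VisionMobile numbers cited above):

```python
# Check that a 57% compound annual growth rate takes ~300,000 IoT
# developers in 2014 to roughly 4.5 million by 2020.
developers_2014 = 300_000
cagr = 0.57
years = 2020 - 2014  # six years of compounding

projected_2020 = developers_2014 * (1 + cagr) ** years
print(f"Projected 2020 IoT developers: {projected_2020:,.0f}")
```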
The Role Of Relational Databases
Will those developers still be using traditional relational databases to capture and process all that data? Yes and no.
Olson is quick to point out the ongoing relevance of relational databases:
If there was going to be a thousand times more data in the world than there is today—and that’s an easy number to believe—it stands to reason that relational databases are going to continue to play a vibrant role in the market, by capturing and delivering business applications on a subset of that data.
But he’s equally quick to showcase an even bigger opportunity for modern data infrastructure like Hadoop:
The big opportunity for a new generation of database technology is not to go disrupt the existing OLTP or OLAP markets. It’s to unlock analytic power against new data flows, data that was never before available, to understand things about the world that we could never know before, because we did not have the information. So I don’t think this is doom and gloom for traditional databases. I think that a new market and a new opportunity in Big Data—driven substantially by IoT—creates huge opportunities for a new class of technologies.
Much of the data that enterprises consume as part of their Big Data projects is transactional in nature, and so very much the province of traditional databases. But that will continue to change as new types of data require new analytics.
No One-Size-Fits-All Solutions
All of which means that we’re in for a polyglot future, with enterprise data warehouses sitting side-by-side with Hadoop, even as NoSQL databases and their relational cousins commune together.
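A toy illustration of that polyglot pattern—transactional records in a relational store, high-volume events appended to a bulk store. Here sqlite3 and a flat file stand in for an enterprise RDBMS and Hadoop, and the schema and function names are illustrative only:

```python
import json
import os
import sqlite3
import tempfile

# OLTP path: small, structured records that need transactions and lookups
# go to a relational store (sqlite3 standing in for an enterprise RDBMS).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")

# Analytic path: huge, append-only, schema-on-read event streams go to a
# bulk log (a flat JSON-lines file standing in for HDFS).
event_log = os.path.join(tempfile.mkdtemp(), "sensor_events.jsonl")

def record_order(customer, total):
    with db:  # commits on success, rolls back on error
        db.execute("INSERT INTO orders (customer, total) VALUES (?, ?)",
                   (customer, total))

def record_sensor_event(event):
    with open(event_log, "a") as f:
        f.write(json.dumps(event) + "\n")

record_order("acme", 99.50)
record_sensor_event({"meter": 7, "kwh": 0.42, "ts": "2014-09-08T10:00:00Z"})

orders = db.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
events = sum(1 for _ in open(event_log))
print(orders, events)
```

The point of the sketch is the routing decision, not the storage engines: each data flow lands in the store whose access pattern it matches.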
After all, Big Data is, well, big. By its very definition, it’s too vast and diverse for any one technology to completely master it all.
Still, Olson and others offering new data technologies argue that Hadoop’s data-handling volume and analytic flexibility mean that “you can just do stuff that wasn’t possible before,” thus unlocking new opportunities from all that data. It’s that new opportunity that has driven multi-billion dollar valuations for Cloudera and other startups, and has attracted serious product investments from Bosch and others.
Lead image of a Cubieboard Hadoop cluster courtesy of Wikimedia Commons
International SEO is a Huge Opportunity For Marketers: Interview With Eli Schwartz by @murraynewlands
As part of our coverage from the sold-out Searchmetrics x Search Engine Journal conference in San Francisco on SEO, content marketing, and analytics, I caught up with Eli Schwartz of SurveyMonkey to discuss the opportunities marketers are missing out on with international SEO. In the video below Eli explains the importance of international SEO, and how you can easily optimize your content for international audiences to increase traffic and conversions. Here are some key takeaways from the video: There is a massive opportunity to get traffic and conversions internationally — even if you don’t really have global products. You can do […]
View full post on Search Engine Journal
A beautiful aspect of Google’s Android operating system has always been that it allows developers and enthusiasts to strip away the platform’s core experience and replace it with homebuilt customized versions. Custom ROMs have been part of Android since nearly the beginning.
So it is natural that custom ROMs have now come to Android Wear, Google’s version of the operating system that runs on smartwatches and wearable devices.
Android developer Jake Day has released one of the first custom ROMs for the LG G Watch, one of the first two Android Wear watches to hit the market. Day posted the ROM on RootzWiki, an Android news and information site for developers and designers.
The ROM—nicknamed Gohma after a recurring boss in The Legend of Zelda games—is fairly simple. It improves the LG G Watch’s battery life, speeds up overall performance, reduces lag between notification cards and increases vibration intensity.
Gohma isn’t a full-blown Android Wear replacement. The ROM abides by the basic user-interface design principles of Wear, and the LG G Watch will still take over-the-air updates to the operating system from Google and LG (which will wipe out the ROM installation). Day makes sure to note that Gohma is a small release intended to improve performance and to make sure that everything is working well before he releases a fuller version of the ROM at a later date.
Gohma is fairly easy to install. Knowledgeable developers just need to make sure that the device’s bootloader is unlocked; the ROM script will then root the device and install itself, allowing the custom software to run.
Unleashing The Community: A Good Thing For Smartwatches
Android Wear generally leaves a lot to be desired. It is Google’s first go at smartwatch software and, initially, it basically turns the watch into a notifications device strapped to your wrist. For the time being, that’s perfectly fine, as wrist-based notifications are a (surprisingly) pleasant way to receive messages. But Android Wear, and smartwatches in general, have much more potential than what is currently available.
Part of that is a hardware problem: engineers are naturally limited by the capabilities of currently available processors and sensors. But the hardware in the LG G Watch is roughly equivalent to that of a 2011 Android smartphone, so it should be able to do much more than the notification cards and voice interaction available in the initial release of Android Wear.
This is where the large community of Android developers has an opportunity to build on top of Wear through custom skins and ROMs, making it a better-performing, more functional and more attractive device. Day’s Gohma should be just the start, as the heavy hitters in the Android ROM community—like CyanogenMod—will surely get involved, pushing Android Wear development to further feats of utility and maturity.
The Android developer community doesn’t operate in a vacuum either. Google listens to developers and often implements features and requests that developers have built on their own to work around the limitations of stock Android. The Android development community is essentially one giant sandbox for Google to learn about what app builders and consumers want in the next version of the operating system. For the last six years, this process has worked well in helping to build ever better versions of Android for smartphones and tablets. Hopefully with the first custom ROM for Android Wear, Google can learn how to build better software for smartwatches as well.
Images: Gohma via HD Wallpaper. Android LG G Watch by Adriana Lee for ReadWrite.
Windows is now truly one operating system, whether you’re on a smartphone, tablet or PC.
Windows Phone 8.1, Windows RT 8.1 and Windows 8.1—that is, the phone, tablet (sort of) and PC flavors of Windows—are no longer distinct operating systems that largely look alike but vary wildly under the hood. Microsoft has spent the last couple of years updating its disparate Windows versions so that they work together, with the goal of letting developers write one app and deploy it—after some tweaking to the user interface—to Windows PCs, tablets and smartphones.
True, Microsoft’s operating system naming conventions are still awful. But that shouldn’t obscure the major step forward this code-base unification represents to developers, nor the benefits that will flow to users as a result.
All three flavors of Windows now run on a common software core, or “kernel,” with a common runtime (i.e., the set of tools necessary to run programs). The major remaining differences between them have mostly to do with how they handle user-interface issues across a variety of devices, input methods (think touchscreens vs. mouse and keyboard), hardware (not just CPU and memory, but graphics processors, accelerometers and other sensors) and screen sizes.
Microsoft knows that those differences still present obstacles for developers, and hopes to address many of them with an update to its integrated developer environment, Visual Studio 2013, which it announced at Build 2014 this week.
Kevin Gallo, Microsoft’s director of the Windows Development Platform, describes the update in a post on the Windows blog.
Write Once, Deploy To All The Windows
The Visual Studio update allows developers to port existing apps across devices and their specific versions of Windows. For instance, if you have a Windows 8.1 app, you can use settings in Visual Studio to target smartphone-specific capabilities in Windows Phone 8.1. Visual Studio is designed to let developers use the same basic app code across different devices and Windows flavors, and allows them to emulate how an app will behave in each case.
From Microsoft’s perspective, the two most important takeaways for developers are these:
- You can build universal apps and share all the code while just making tweaks to the user interface
- Visual Studio offers a variety of diagnostics tools to optimize apps for use on different devices—smartphones running Windows Phone, laptops running Windows 8.1, etc.
Essentially, Microsoft wants to make it as easy as possible for developers to build Windows apps. Given Microsoft’s minuscule share of the mobile market to date, you can hardly blame it.
In practice, this means Windows Phone developers—and you know who you are—essentially have three options. If you’ve built your apps using the Silverlight Phone 8.0 development tool, you don’t have to do anything; they’ll continue to work as is on Windows Phone 8.1.
Alternatively, you can update your apps to Silverlight Phone 8.1 to access the new features in Windows Phone 8.1, such as the Cortana personal assistant and customizable homescreens. Or you can migrate your apps to the universal Windows app platform with the new tools in Visual Studio. If you prefer, you can also just start from scratch and build a “universal” Windows app to Microsoft’s specifications, which would theoretically optimize it for the new unified Windows code base.
Buy Once For All Of Your Windows
For consumers, Microsoft aims to make buying apps easier. If you buy an app for your Windows 8.1 laptop, you can automatically download it to your Windows Phone, or vice versa. Microsoft insists that you won’t need to buy separate apps for separate versions of the operating system because Windows is now essentially one big operating system. The same is supposed to hold true for in-app purchases—they should migrate from laptop to tablet to smartphone as well.
Apple doesn’t do this. If you buy an app for Mac OS X on your iMac or MacBook, you still need to buy or download the iOS version separately for your iPhone or iPad. Google doesn’t do this, either. If you buy an app or extension for Chrome OS, you will still need to buy that app for Android on Google Play.
Some individual apps for Android and iOS, of course, do let customers download versions for different devices—for instance, via a subscription service or universal login. But that’s up to the app developer. It’s not required by Apple or Google.
Press releases have a part to play in link building campaigns. Is your press release actually newsworthy? Is it getting to the right people? Is it focused? Is it structured properly? These are just 4 of 10 avoidable online public relations mistakes.
View full post on Search Engine Watch – Latest
Bitcoin exchange Mt. Gox went dark on Tuesday without much explanation beyond an unconfirmed and purportedly leaked document (embedded below) that alleged thieves had stolen 744,408 bitcoins worth $380 million from the world’s largest exchange and that it could “go bankrupt at any moment.”
That document, titled “Crisis Strategy Draft,” now appears to be genuine, according to none other than Mt. Gox CEO Mark Karpeles himself. Karpeles apparently confirmed that it is “more or less” legitimate earlier this week in an Internet Relay Chat with a self-described adviser to bitcoin investors.
Shrouded In Ambiguity
Like so much news about Mt. Gox, a notoriously tight-lipped, Tokyo-based company, this confirmation is shrouded in ambiguity. Karpeles’ online chat allegedly took place two days ago, at 10:39am ET on Tuesday, with Jon Fisher, an affiliate marketer in New York who told ReadWrite that he now advises “some of the largest private holders of bitcoin.”
Fisher posted the text of the chat on his website, WickedFire.com, on Tuesday roughly eight hours after it took place. Fox Business News reported the text of the chat on Tuesday. Fisher told me by email that the chat is genuine. There doesn’t appear to be any obvious way to contact Mt. Gox for comment.
In addition to laying out Mt. Gox’s woes, the crisis document outlined a four-point recovery plan for the exchange that included asking partners, investors and donors for bailout funds to continue operations; ousting Karpeles as CEO; and taking the exchange offline for a month in order to restructure both its underlying technology and its business.
In the course of the chat, Fisher asked Karpeles if the crisis-management document is “even legit.” The CEO, writing under the pseudonym “MagicalTux,” replied:
[11:04] <MagicalTux> more or less
[11:05] <MagicalTux> as the name suggests it’s a draft, and it’s a bunch of proposals to deal with the issue at hand, not things that are actually planned and/or done
[11:06] <MagicalTux> this said this document was not produced by MtGox
Bitcoin blogger and entrepreneur Ryan Selkis, who first published the crisis-management document on his blog, now reports that it originated with a global consulting firm called Mandalah. Selkis also reports that Mt. Gox’s initial attempts to solicit bailout funds from outside investors have gone nowhere.
A Casual Tone
During the Tuesday discussion, which took on an especially casual tone for a man facing the wrath of millions of angry customers, Karpeles said he’s still in Japan trying to make things right. “We haven’t given up,” he wrote.
Karpeles declined to say whether he plans to step down as CEO. But he said he’s unable to access his own money, and presumably that of Mt. Gox customers, although he added this caveat: “technically speaking it’s not ‘lost’ just yet, just temporarily unavailable.”
Members of the Bitcoin community were less than pleased with Karpeles’ lackadaisical tone. During the conversation he complained of gaining serious weight due to stress, referenced Batman, and—in an odd attempt to verify his identity—posted a picture of his cat asleep beside his keyboard.
“First public communication after shutting down site containing millions of dollars belonging to thousands of people who have been kept completely in the dark with no information about the status of their money. Posts a picture of his cat, links to a batman meme, and complains about getting fat. This f*cking guy,” redditor thesacred wrote.
Here’s the Crisis Strategy Draft:
Photo courtesy of Mt. Gox