Posts tagged Clouds
We tend not to think about storage – until we don’t have enough. We carelessly store documents, emails, images, video, and massive amounts of all kinds of data, only to wonder why there never seems to be enough room for our company’s stuff. But as new technologies combine to provide storage over the Internet, they are easing fears of limited capacity, and the promise of virtualized architectures is helping shape the next phase of the Internet.
Storage Isn’t Sexy, But…
Virtual storage is neither as flashy nor as sexy as virtualized servers. Historically, enterprises set up a storage device, backed up data and content at regular intervals and forgot about it. But because hard drives offer limited capacity, it has become necessary to manage multiple storage strategies. Additionally, archiving digital content traditionally meant burning it to a disc or transferring data to magnetic tape, and the archived data and content was not readily accessible.
As cloud computing has emerged as a basic networking practice, more and more content is stored in virtualized, interconnected storage devices. Not only does this make it possible to access massive files online in an instant, it also makes storage more affordable, efficient and easier to manage.
By abstracting storage from a set of individual physical hard drives into logical volumes (or partitions) spread across any number of physical drives, storage becomes less expensive and far more flexible. With virtualized storage accessible in a cloud computing environment, companies and even individuals can add as much storage space as they need, essentially on demand.
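To make the abstraction concrete, here is a minimal sketch in Python of how a virtualization layer might map one logical volume’s blocks onto several physical drives. The class names, capacities and round-robin striping scheme are illustrative assumptions, not any vendor’s actual API.

```python
# Minimal illustrative sketch: a logical volume whose blocks are striped
# across several physical drives. Names and the striping scheme are
# hypothetical, not any vendor's API.

class PhysicalDrive:
    def __init__(self, name, capacity_blocks):
        self.name = name
        self.capacity_blocks = capacity_blocks
        self.blocks = {}  # physical block index -> data

class LogicalVolume:
    """Presents one address space backed by many drives (round-robin striping)."""
    def __init__(self, drives):
        self.drives = drives

    def _locate(self, logical_block):
        # Spread logical blocks across drives so no single disk is the limit.
        drive = self.drives[logical_block % len(self.drives)]
        physical_block = logical_block // len(self.drives)
        return drive, physical_block

    def write(self, logical_block, data):
        drive, physical_block = self._locate(logical_block)
        drive.blocks[physical_block] = data

    def read(self, logical_block):
        drive, physical_block = self._locate(logical_block)
        return drive.blocks.get(physical_block)

# Usage: two drives appear to the application as a single volume.
volume = LogicalVolume([PhysicalDrive("disk-a", 1000), PhysicalDrive("disk-b", 1000)])
volume.write(0, "report chunk 0")
volume.write(1, "report chunk 1")
print(volume.read(0))  # "report chunk 0"
```

Adding capacity “on demand” in a real system amounts to attaching more drives and updating the mapping layer – which is exactly the part the application never has to see.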
Hardware vs. Management
For consumers, this means devices like smartphones and tablets do not require massive storage drives. For enterprises, virtualized storage means spending less on hardware and more on efficiently managing data and content. The trend means companies can protect their remote-office data and eliminate the need for multiple storage networks. Virtualizing storage also helps with disaster recovery by spreading information across remote locations and keeping multiple copies of data. The push now is toward making cloud-based virtual storage even more efficient and less expensive. Some enterprising companies have already managed their cloud architectures, with multiple storage technologies, so well that they’ve adapted their own capabilities to deliver Storage as a Service to other companies.
There’s still potential for further migration toward virtualized storage. Forecasts for the global cloud virtualization software market (currently estimated at $6.7 billion) between 2011 and 2015 show a year-over-year growth rate of 14.98%. Virtual machine and cloud system software represents the fastest growing segment, with research firm IDC pegging growth at 17.8% in the first half of 2012.
Investments in cloud-based storage also suggest future growth. Venture funding for storage companies totaled $458 million through the first three quarters of 2011, according to analysis from Strategic Advisory Services International. That is 42.4% more than the $321.5 million storage startups received over the same period a year earlier. Storage mergers and acquisitions are also on the rise, with 23 deals adding up to $8.7 billion through the first three quarters of 2011.
Benefits Of Virtualized Storage
Outside of the obvious benefits of being able to access content from multiple locations on multiple devices, virtualized storage also allows for information sharing between large numbers of people. While it’s still a relatively new technology trend, storage virtualization isn’t hype. “But it’s all about the use cases,” says John McArthur, president of Walden Technology Partners and a board advisor at Starboard Storage Systems. “The use cases will evolve and mature over time, just as they are with server virtualization.”
McArthur points to storage asset management as one early win: migrating data from one device to another without having to physically link them together. Other benefits include replicating data between locations, making point-in-time copies of data, expanding storage capacity and shrinking storage costs. Additionally, virtualized storage allows for a “pay as you go” subscription model that can increase storage capacity as needed, without having to grow data center footprints.
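One of those benefits, point-in-time copies, is easy to see in miniature. The hypothetical Python sketch below uses a copy-on-write-style snapshot: the snapshot copies only the block map, so the underlying data is shared with the live volume until it is overwritten. This is a toy illustration of the idea, not any product’s implementation.

```python
# Toy illustration of a point-in-time copy. The snapshot copies the block
# map (references), not the block data, so it is cheap to take; data is
# shared with the live volume until the live volume overwrites a block.

class Volume:
    def __init__(self):
        self.blocks = {}  # block index -> data

    def write(self, index, data):
        self.blocks[index] = data

    def snapshot(self):
        # Copy only the mapping of block index -> data reference.
        return Snapshot(dict(self.blocks))

class Snapshot:
    def __init__(self, frozen_blocks):
        self.frozen_blocks = frozen_blocks  # view frozen at snapshot time

    def read(self, index):
        return self.frozen_blocks.get(index)

vol = Volume()
vol.write(0, "ledger v1")
snap = vol.snapshot()        # point-in-time copy
vol.write(0, "ledger v2")    # live volume keeps changing
print(snap.read(0))          # still "ledger v1"
print(vol.blocks[0])         # "ledger v2"
```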
“Some companies will embed storage virtualization in an appliance to make their appliance simpler to manage and control,” McArthur said. For example, hedge fund Thames River Capital virtualized its storage area network and saw a 40% improvement in the performance of its virtual machines as a result.
Virtualization Meets The Cloud
As the technology improves and devices continue to be connected to each other, cloud computing will increasingly merge with virtualized storage. One consideration is using cloud versus physical storage for high-performance computing at scientific research centers, according to John Bates, co-founder and CTO at TwinStrata.
“Cloud storage can solve some of the problems associated with big data, particularly in the areas of resource planning and infrastructure growth costs,” Bates told industry reporters. “Cloud storage offers massive and automatic scalability, without requiring heavy capital expenditures on fixed storage systems that may reach capacity too fast.”
The Internet Of Things, And More
Another use case for cloud-based virtualized storage is enabling a wide variety of non-computing devices connected to the network, also known as the “Internet of Things.” IDC estimates that by 2020 approximately 30 billion devices will be connected, each requiring cost-effective software and storage for the information it gathers.
Considering the capabilities being developed for the next phase of the Internet, it’s not much of a stretch to think that virtualized storage could be used to recreate virtual versions of specific events at specific points in time. A wide array of networked storage devices would hold the information from computers, sensors, cameras and other information sources to quickly recreate almost any event or scenario.
View full post on ReadWrite
“Immature” is one of those code words that execs trot out when they’re faced with disruptive competition that’s likely to eat their lunch over the long haul. For example, VMware CEO Paul Maritz likes to dismiss OpenStack as immature. While OpenStack may not have feature parity with VMware’s cloud offerings, it’s likely to be “good enough” much sooner than Maritz cares to admit.
Maritz went on to say that VMware’s “greater, nearer-term challenge will probably come from Microsoft.” This is classic framing, trying to ensure that customers compare VMware’s offerings against a similar proprietary offering. But customers will increasingly take the long view and consider open cloud options as well.
Immaturity Doesn’t Last
Open clouds, OpenStack being just one, are following a similar development trajectory to Linux and open source databases. In phase one, the technology is for early adopters and not at all comparable with proprietary offerings. Early adopters and companies that take a long view and/or have a good reason to disrupt the market deploy and contribute to the projects to push them toward maturity.
Not long after this, the tech remains immature but is considered good enough for many workloads. It starts pushing into mainstream usage more and more with each release. IT staff tests it and finds that it can get the job done without having to waste budget on the proprietary software.
At this stage, proprietary vendors like VMware can still make a case for their products for high-end deployments, and for being better documented and/or having more features. But for many customers, that doesn’t matter. The tech is good enough to get the job done, and the money not spent on software licenses easily compensates for the gap between the open source project and the proprietary alternatives.
OpenStack, et al, are very close to good enough, or have already reached that stage for many workloads. Quite a few companies are already deploying CloudStack and Eucalyptus.
From here, it’s not going to be long before Infrastructure as a Service (IaaS) is just as much a commodity as the compiler, operating system, database, hypervisor and hardware.
VMware has already lived through this cycle at the hypervisor level. It wasn’t that long ago that VMware folks were claiming that the hypervisor was still a value proposition. Ultimately, the company had to concede that the hypervisor had become a commodity component and made ESXi free to customers. The value had moved up the stack to management tools, where VMware still had an advantage.
But open tools are once again nipping at VMware’s heels, beginning to make the IaaS stacks a commodity proposition.
Take the Long View
Despite the relative immaturity of open IaaS platforms, it makes sense to start evaluating them now with an eye to standardizing on an open stack (but not necessarily OpenStack).
First, it’s almost certain that the platforms will reach maturity soon. Naturally, the larger the user and contributor pool for each platform, the better.
Second, if your company hasn’t already made an investment in a proprietary platform, why lock yourself in now? While VMware or Microsoft would no doubt like to lock up the private cloud market, buying into those platforms now is a good way to ensure that you’re stuck with them in the long run.
Maybe VMware or Microsoft has the right platform for your environment, but the current state of OpenStack shouldn’t be the deciding factor. All of the open cloud projects have been maturing rapidly, and it’d be foolish to assume they won’t be ready for your workloads sooner rather than later.
Image courtesy of Shutterstock.
View full post on ReadWriteWeb
Last week, the CloudStack controversy raised the question of whether there can be more than one open source cloud. Evan Prodromou, founder and CEO of StatusNet, says no, arguing that it’s “going to be better for everyone” if there is a single contender for open source cloud computing, thanks to network effects.
Prodromou makes some good points, but glosses over the reality of the cloud computing market – not to mention quite a few healthy and successful open source projects that compete pretty heavily.
Here’s part of Prodromou’s argument for a single project to rule them all:
Users are more likely to try out a package that is well supported, has lots of integrated software, has commercial and community efforts around it, as well as documentation, books, and so on. Contributors are more likely to work on new features and fix bugs for projects that they know have legs. Third-party developers are also more likely to develop for platforms that have lots of users. Integrators use platforms they think will last. People blog and write mainstream press articles about bigger projects.
All of this is true, as far as it goes. Developers don’t want to deal with 10 different platforms. Projects that have few users and little interest are unlikely to garner much press, which helps keep them small.
The problem is that Prodromou goes on to assume that competing open source projects will automatically “keep each other small” and “impede growth of the entire market.” He also ignores the reality of the existing market, and the politics and motivations of the players involved.
We Don’t Live in a Perfect World
In a perfect world, all the participants would play nice and work together for maximum benefit. But we don’t live in a perfect world. If open source has taught us anything, it’s that even when participants share a big-picture goal, they’re likely to disagree on how to achieve that goal or have additional motives that hinder cooperation.
It’s often proposed, for instance, that Linux would have had a better chance of success on the desktop if everybody would just work together instead of pursuing so many different projects.
In a perfect world, that might be true. But it ignores the personalities and individual goals of the organizations and developers participating in GNOME, KDE, etc.
In theory, Linux could have conquered the desktop market – if only the companies and developers could have unified behind one project.
In reality? There’s no way to get all the different parties behind one project. Even if you had, you would have lost a lot of existing Linux desktop users, who would have been left out in the cold by the homogenized atrocity that likely would have resulted from trying to combine everything. Many developers who were interested in the success of KDE, for example, have no interest in the success of GNOME.
Developers are not interchangeable units of work that can be applied to any project. Likewise, the general success of open source cloud computing is not as valuable to a company involved in OpenStack, CloudStack or Eucalyptus as the success of a specific platform.
So there’s little chance that we’re going to herd all the cats in any general direction – even if it is ideal.
Growth and Value
While the idea of all the cloud vendors and developers aligning to fight the good fight may be appealing, it’s not only unlikely – it’s also unnecessary.
Prodromou argues that competing factions are going to “impede” the market. Perhaps there’s a Platonic ideal market size we could reach with a single open source cloud project, but that doesn’t mean having several projects will slow the growth of the open cloud market.
No doubt, one or two of the efforts will ultimately fail. But the cloud computing market is not being driven by CloudStack, OpenStack and the rest. Instead, those projects are being driven by market demand for open source cloud software. The cloud market is young and growing rapidly. Now is the perfect time to see the results of several competing projects, rather than a single effort that tries to be all things to all customers. If they’re able to differentiate themselves for different workloads and market segments, they will be much more valuable as separate entities than a single generic effort.
Take a step back and look at some of the other competing projects. You have MySQL and PostgreSQL, for instance. Have those projects hindered the market for open source databases? Hardly. Look at the number of NoSQL databases – have they harmed the market? Is it smaller because of the number of projects, or has it grown faster because different projects are more suitable for different use cases?
Likewise, a dominant open source project doesn’t always benefit the market. When Firefox was the primary open source browser, it suffered from bloat and glacially slow improvements. Competition from Chrome has spurred Firefox to improve much more quickly.
Apache managed to climb to the top of the heap as the premier open source Web server – but wasn’t meeting the needs of many companies. That, eventually, gave rise to Nginx and others. The competition between Nginx and Apache didn’t hurt anybody, did it?
Perhaps Prodromou’s view is colored by the fact that he’s looking at open source social media efforts that have failed to take off in any meaningful way. The network effect Prodromou cites looms large when you’re dealing with social networks, because the value truly lies in the number of users on any given network. A social network that doesn’t have your friends and colleagues is of no interest. But it’s less clear whether open source efforts in that direction would have fared better if there were only one.
There’s plenty of room for two or three open source cloud efforts. The CloudStack announcement last week isn’t harmful for the larger movement at all.
View full post on ReadWriteWeb
This morning’s announcement that Citrix would be contributing CloudStack to the Apache Software Foundation (ASF) is a big win for Apache – and a minor loss for copyleft. With the change, only one open-source cloud infrastructure player (Eucalyptus) is hewing to the copyleft model.
OpenStack, OpenNebula, and soon CloudStack and OpenShift, are provided under the Apache license. If you’ve been watching open-source licensing trends, this may not come as a shock. The figures from Black Duck and other sources indicate that the GNU General Public License (GPL) family has been on the decline for some time.
The numbers don’t tell the full story, of course. Looking at a wide swath of projects across several open-source hosting sites doesn’t give a perfect picture. Many of the projects being counted are of little importance and will wind up abandoned soon after they start.
But there does seem to be a clear trend that corporate-sponsored projects are trending away from copyleft licenses. When you’re building cloud infrastructure software, this might be a bit of a problem.
To Copyleft, or Not to Copyleft?
This morning I spoke to Mårten Mickos, CEO of Eucalyptus Systems and former CEO of MySQL AB. In both cases, Mickos has helmed companies formed around GPL’ed products – GPLv2 in the case of MySQL, GPLv3 in the case of Eucalyptus. Note that Mickos did not actually choose the license in either case; he was brought in after that decision had been made.
But Mickos says that he’s a believer in the “full openness of the code,” which includes protecting the “four freedoms” espoused by the Free Software Definition.
Mickos says the license a project is under is not as important as the way the project is governed. “The issue of contribution is up to the steward of the project; you can run a project so that you get lots of contributions or so it doesn’t get lots of contributions.”
It’s certainly true – and Eucalyptus is an example of this – that governance and the way a project is managed make an enormous difference. Eucalyptus has not seen the level of contribution that it wanted, so the company brought on former Red Hatter Greg DeKoenigsberg to put the community back on track.
Eucalyptus may be attracting more contributors now through better governance and such, but it doesn’t seem to be attracting corporate contributors. (They have signed a deal with Amazon, but it does not include Amazon contributing code to Eucalyptus.)
When it comes to individual contributors, the choice of license may not be as important. Some developers have strong preferences about licenses, but as Mickos pointed out this morning, it’s usually a matter of how a project is run and whether it’s useful to the developer. If a project has good governance and has a reasonable infrastructure for contributions, the choice of open-source license may matter very little.
Institutional Preference of License
Things are quite different when you’re trying to attract hardware and software vendors that might prefer a license that allows proprietary re-licensing.
As Citrix’s Mark Hinkle told me yesterday, a lot of vendors have a strong institutional preference for nonreciprocal licenses. It’s usually described as being “commercial-friendly,” which is a euphemism for “we don’t have to give back code if we don’t want to.”
It doesn’t really matter what an individual developer likes or dislikes if the employer decides it only wants to contribute to a non-copyleft project. This morning, Mickos agreed that hardware vendors like Dell and HP seem to prefer permissive licenses like Apache – but said that “advanced end users” like Cornell University will contribute to copyleft projects if they enjoy using the technology.
That may be true, but it’s looking more and more like corporate contributors – including open-source stalwart Red Hat – prefer non-copyleft licenses for cloud infrastructure software.
Since much of cloud development is being driven by corporations, and not individual developers, this seems like a tough trend for copyleft supporters and projects.
View full post on ReadWriteWeb
I’m not one of the people who think “private cloud” is an oxymoron. But vendors trying to offer self-hosted alternatives to Google App Engine, Amazon Web Services, Heroku and the rest need to understand that cloud deployments are supposed to be about reducing the friction of deploying and running services. That includes the friction of undisclosed pricing and the sales pitches that enterprise software has been saddled with for way too long.
One of the things that I really like about providers like Amazon and Heroku is the simplicity of dealing with their service. You check out the sites, evaluate the pricing and test out the free tier to see how it handles. If all goes well, it’s time to start putting services into production.
The downside to Heroku and AWS, of course, is that you’re running on someone else’s architecture. There’s a legitimate demand for private PaaS and IaaS services, and more and more vendors are trying to elbow their way into that business. The problem is that some of the new entrants are hoping they can continue using the old and busted enterprise pricing models.
In the past two weeks, I’ve had run-ins with vendors that completely refused to disclose any real information about their pricing. And this is after one vendor promised during a briefing to provide pricing after the call, and the other weaseled out of providing pricing info after being told it was a condition of my taking the briefing.
Vendor one sent this follow-up, in lieu of pricing: “[product name] pricing is based on a number of factors. The bottom line is that it’s priced to be comparable to what a company would pay to have their own people run it in-house, but in addition, you get the power of [company identifying slogan redacted] and a team of cloud experts with years of experience running one of the largest clouds in the world…” In other words, it’s too complex.
Complexity is No Excuse
The excuse for not providing pricing in most cases boils down to complexity. Vendors say that pricing is too complex to disclose because it depends on too many factors. If that’s the case, I’d recommend two things. First, simplify your pricing structure. Second, provide a baseline or some sample scenarios so that customers have some idea what they’re getting into before having to engage with sales.
If you look at Amazon’s pricing, though, you see what a sham the complexity excuse is. Amazon’s pricing just for EC2 depends on how many instances you’re running, what type of instances, whether you commit to a certain number of instance hours and a number of other factors. Bottom line, though? Amazon’s pricing is fairly transparent.
That doesn’t prevent Amazon from having variable pricing for volume customers. If you want more than 500TB of data transfer per month, for example, Amazon directs you to its sales folks. But customers have a pretty good baseline of what Amazon’s pricing is. There’s room for negotiation and volume discounts, without leaving customers clueless. Convirture is another example: its pricing is variable, but it at least publishes a starting price that gives customers a baseline.
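To show how publishable even “complex” pricing can be, here is a small Python sketch that estimates a monthly bill from a handful of posted per-unit rates. The rates, instance types and discount below are invented for illustration; they are not Amazon’s (or any vendor’s) actual prices.

```python
# Illustrative only: a back-of-the-envelope monthly estimate built from
# published per-unit rates. All numbers below are made up for this sketch;
# they are NOT Amazon's (or any vendor's) real prices.

HOURLY_RATE = {               # $/instance-hour by hypothetical instance type
    "small": 0.08,
    "large": 0.32,
}
DATA_TRANSFER_PER_GB = 0.12   # $/GB transferred out, flat for this sketch
RESERVED_DISCOUNT = 0.30      # 30% off compute if you commit up front

def estimate_monthly_cost(instances, transfer_gb, reserved=False):
    """instances: list of (instance_type, count); assumes ~730 hours/month."""
    hours = 730
    compute = sum(HOURLY_RATE[kind] * count * hours for kind, count in instances)
    if reserved:
        compute *= 1 - RESERVED_DISCOUNT
    return compute + transfer_gb * DATA_TRANSFER_PER_GB

# A customer can rough out a deployment's cost before ever talking to sales.
on_demand = estimate_monthly_cost([("small", 4), ("large", 2)], transfer_gb=500)
committed = estimate_monthly_cost([("small", 4), ("large", 2)], transfer_gb=500, reserved=True)
print(f"on demand: ${on_demand:,.2f}/month; with commitment: ${committed:,.2f}/month")
```

The point isn’t the specific numbers; it’s that once per-unit rates are public, a prospective customer can do this math on their own – which is exactly what undisclosed-pricing vendors prevent.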
Why so Secret?
As fellow RWWer David Strom says, priceless is not a marketing strategy. Wrong pricing, says Strom, “can turn the most amazing product into a dog, and not putting the pricing online… just makes everyone more frustrated and the chance to lose a customer to a competitor, where this information is clearly stated.”
Just why are vendors so secretive about pricing? It might not be because the pricing is too high. Evan Schuman, editor at StorefrontBacktalk.com, says “it’s just as often a fear of the price being too low. For a certain prospect with a certain budget, the vendor might be able to charge more. It’s the old ‘how much is it? Well, how much do you have in your pocket right now?’ game. Customized pricing cuts both ways. Yet another reason to insist on meaningful numbers.”
No Pricing, No Story
As it happens, I do insist on meaningful numbers. Without some hard pricing information – not just a handwavy “pricing is variable” – I’m not covering the product or service, period.
It turns out, I’m not the only member of the tech press with this policy. In polling other folks, like Schuman, I found universal agreement that pricing is almost always a requirement for a story. Pam Baker says, “readers who are interested enough to read a particular review or product story want to know the price. So, if a vendor refuses to disclose pricing, or at least a pricing range to cover the various purchase options, then I either won’t cover the product at all, or I’ll note to the reader that the undisclosed price is probably kept secret because it is way too high.”
Of course, there are some exceptions to the “no pricing, no story” rule. When companies announce a product in development, it’s not uncommon that pricing isn’t set. When you’re a long way out from the supported offering, it might still be newsworthy.
But when something is going from a closed beta to general availability, as was the case with the vendor that weaseled out of pricing information, there’s no excuse. It’s one of the first questions that the customers are going to ask, and it belongs in any news coverage of the offering.
Cloud vendors that want to compete with AWS, Heroku, Engine Yard, Google, and the rest need to bring their pricing practices into this century. Had problems with vendors and pricing? I’d like to hear your stories in the comments.
View full post on ReadWriteWeb
If you run multiple cloud providers in your shop and are looking at ways to connect them with virtual networks, then vCider’s Virtual Private Cloud v2 release deserves a closer look. The service helps you create private links between different providers, just as ordinary VPNs make external networks appear to sit inside your data center.
For example, let’s say you want to use Rackspace’s MongoDB service but want to use Amazon for storage. vCider will put encrypted tunnels between the two IaaS providers and give you private IP addresses for your traffic to traverse. One customer, a biochemical library, is running a Cassandra database in Holland while using AWS in the US for storage, and has connected the two locations. It reports that latencies are small and network performance isn’t an issue. Using vCider also means you don’t have to deploy OpenVPN or an equivalent solution, which in some cases provides a big boost in performance; the customer cited above saw a six percent improvement by forgoing OpenVPN.
New to the v2 version is a virtual network switch that can cloak your network access, so only traffic from the encrypted tunnel is allowed into your servers.
vCider is available now for any cloud-based instance running a modern Linux kernel (v2.6 and above). It is not yet available for Windows instances. You can deploy up to eight systems free of charge, and prices start at $100 per month for up to 16 systems in a virtual private cloud. You can get more information on vCider here.
View full post on ReadWriteWeb