Posts tagged past
It’s a computer, but there’s no monitor. Or fan, or keyboard, or even a case, for that matter. But the credit-card-sized Raspberry Pi is still getting snapped up by consumers: less than two years after the first Pis shipped, over two million have been sold.
Raspberry Pi falls into a category of computing device known as a miniboard, where the bare components of a computer (processor, video interface, USB ports, and memory) are lashed together on what amounts to a single circuit board.
But from such a simple device, many things can be created. By plugging in external storage, a monitor, and a keyboard, users can have a Linux computer running in minutes. Or they can build sophisticated electronic devices like a media streamer or an Internet radio.
The flexibility of the Raspberry Pi is certainly an attractive feature. So, too, is the price: the Model A costs $25, the Model B $35. Both models feature a 700-MHz ARM processor on a Broadcom system-on-a-chip, with 256 MB of RAM and an SD/MMC/SDIO card slot for onboard storage. The big difference between the two is that the extra $10 gets you a 10/100 Ethernet port and a second USB port on the Model B.
Bringing Code To The Masses
Two million devices sold is quite an achievement for a project that has its roots in trying to decrease computer illiteracy.
In 2006, team members in the University of Cambridge’s Computer Laboratory in the United Kingdom noticed a sharp decline in computer skills among the A-level students accepted into their program. Worse, it was a trend they could see being repeated in other nations besides the UK.
Despite the proliferation of personal computers, or perhaps because of it, kids were no longer playing around or experimenting with PCs. Instead, they were using apps as they were presented, or just buying and downloading new ones to do what they wanted. Hacking and coding, it seemed, were going out of style.
The Cambridge team, led by designer Eben Upton, began to put together a small, portable, and very inexpensive device that would boot right into a programming environment. From there, a student of any age could start coding to their heart’s content.
By 2008, the device now known as the Raspberry Pi had completed the design phase and was ready for production. The Raspberry Pi Foundation was founded that year, and after years of fundraising and production work, the Pi devices were rolling off the assembly line in February 2012.
The team is stunned by the project’s success, even as it works on improvements to the popular miniboard device:
“We never thought we’d be where we are today when we started this journey: it’s down to you, our amazing community, and we’re very, very lucky to have you. Thanks!”
Image courtesy of Wikimedia.
View full post on ReadWrite
A report released by my company, Shareaholic, a social plugin service used by 200,000 publishers reaching 250 million people monthly, reveals referral traffic trends from 8 of the largest social networks over the past 13 months. The data (pictured below) shows strong year-over-year growth for Facebook, Pinterest, Twitter, YouTube, and LinkedIn, decline for StumbleUpon and Reddit, and […]
The post Facebook, Pinterest, and Twitter Traffic Referrals up 54%+ in Past Year by @dannywong1190 appeared first on Search Engine Journal.
View full post on Search Engine Journal
Seo In Guk talks about Girls' Generation + his past dating rumor with YoonA
It appears Seo In Guk has quite a connection with Girls' Generation. He has not only worked with members of the group in productions but was also once the subject of a dating rumor with one of them. He previously worked with YoonA on the drama 'Love Rain' and more …
View full post on SEO – Google News
Future Tech is an annual ReadWrite series where we explore how technologies that will shape our lives in the years and decades to come are grounded in the innovation and research of today.
You’re at a cocktail party. As people jostle past you in a loud room, you turn your head to listen to the fascinating raconteur who just buttonholed you. In so doing, you’re placing one ear closer to the speaker’s mouth than the other.
What you just did, drink in hand, is an instinctive physics experiment. You’ve subtly altered the amount of time it takes for sound to reach each of your two ears. Without knowing it, you just performed an act that researchers call adaptive beamforming.
Cocktails And Change
The cocktail party problem is something researchers have been studying since the 1950s. At the time, nobody in their right mind could have imagined what it would become.
In the mid-1990s, several scientists at Microsoft’s giant research division wanted to take a closer look at the concept of adaptive beamforming. They started focusing arrays of microphones on a single spot. Specifically, they were looking at microphone arrays based on something called a minimum variance distortionless response (MVDR) beamformer. By the early years of this millennium, they’d built a nine-microphone prototype that they brought into a few hundred homes in the Seattle area to test.
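For the technically curious, here is a minimal NumPy sketch of the idea. It implements the textbook MVDR formula, not Microsoft’s actual code, and the covariance matrix and steering vector below are toy stand-ins for what a real microphone array would measure:

```python
import numpy as np

def mvdr_weights(R, d):
    """Classic MVDR beamformer weights: w = R^-1 d / (d^H R^-1 d).
    R is the spatial covariance of the M microphone signals;
    d is the steering vector toward the desired talker.
    The weights pass the look direction through undistorted
    (w^H d = 1) while minimizing power arriving from elsewhere."""
    a = np.linalg.solve(R, d)       # R^-1 d, without forming the inverse
    return a / (d.conj() @ a)       # normalize so that w^H d = 1

# Toy example: a nine-microphone array, like Microsoft's prototype.
M = 9
rng = np.random.default_rng(0)
d = np.ones(M, dtype=complex)       # steering vector for a broadside source
A = rng.normal(size=(M, M))
R = A @ A.T + np.eye(M)             # stand-in positive-definite covariance
w = mvdr_weights(R, d)
print(abs(w.conj() @ d))            # ~1.0: the look direction is preserved
```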
Enter Alex Kipman, part of Microsoft’s Xbox team. He was looking for a new controller for the Xbox video-game console. The microphone arrays in the Microsoft Research lab, combined with other innovations such as infrared depth sensing, provided a great opportunity to create a game controller that could capture human movement.
The result was the birth of Microsoft Kinect. At the time of its release, it was the fastest-selling gadget in history, with 8 million units sold in the first 60 days.
Microphone arrays are just one example among dozens of how different fields of research laid the path over decades for the Kinect to come to market. But you can see what happened: Researchers pondered a theoretical question and formed explanations. They turned those explanations into applied models. Eventually, an enterprising entrepreneur looking to solve a real-world problem seized upon those models as the solution.
That’s the classic model for long-term research and development. What is being upended by the Internet is what happens next. Products no longer march solemnly from lab to market: They ping-pong back and forth in a tighter and tighter loop, with mercantile realities inspiring academic progress as much as the other way around.
For example, Kinect technology is now being used by researchers across the world, in places like the MIT Media Lab, to study 3D modeling, imaging and computer vision. One era of innovation begets the next.
And so we continue to evolve, faster than ever before.
This is how innovation really works.
We give most of the credit in today’s technological world to entrepreneurs who come up with interesting ways to apply innovations that took researchers years and decades to create.
But we are living the future that those researchers spent so much time building. Smartphones, laptops, tablets, the cloud, sensors, video games, automobiles, nuclear energy, fracking … everything that shapes our society today is derivative of some smart person exploring the art of the possible, then pushing past it. Entrepreneurs may be the ones who get it into our hands, but they didn’t start the journey.
The future is now. Yet we—humanity—are just getting started exploring.
A Map To The Future Present
Hence ReadWrite’s Future Tech series, an annual journey into innovations both ubiquitous and unforeseen. Scientists and researchers are still asking questions big and small that will lead us to places that we could never have possibly imagined. At the same time, we can look at what researchers are doing today in corporations, think tanks and universities across the world to see the seeds of tomorrow.
We see the future every day. Innovation is happening faster now than it ever has in the history of human evolution. It took centuries, then decades, for the fruits of laboratory experiments to reach store shelves. Now it can take less than a year.
The reason for this is that systems have been built that enable innovation, allowing people to build efficiently on layers of technology already laid down by their predecessors.
The most prominent example of this is the Internet. At its core, the Internet is a constantly evolving, decentralized platform that is the spine of just about every technological project being undertaken. Information is more readily available than ever before, enabling researchers around the globe to collaborate in real time.
For instance, look at the Large Hadron Collider at the European Organization for Nuclear Research (CERN). Physicists are hurling subatomic particles at each other at mind-numbing speeds, looking for evidence of the “God Particle” (also known as the Higgs boson). Other physicists across the world can extract data sets from the Large Hadron Collider almost as soon as the collisions happen and perform their own research. The power of the collective brain has never been more apparent. And the Internet plays a massive role: it’s no coincidence that the World Wide Web was born at CERN.
On a smaller scale, we see consumer products evolve faster than ever. Upgrades used to come a few times a decade; now new models arrive annually, with over-the-air software updates wirelessly transforming our devices in ways both big and small.
We have built a technological infrastructure that allows innovators to build quickly and cheaply compared with traditional university and think-tank research. Industrial design and manufacturing processes have been streamlined and globalized. Apple can build an iPhone and add to its computing power every year. The changes may seem subtle from year to year, but the years go by and you realize that the iPhone you have in 2013 is light-years more sophisticated than the one you had in 2007. It is from this infrastructure that the next generation of technology will emerge.
Over the next month, Future Tech will explore the possibilities of tomorrow grounded in the innovation of today. In the coming weeks we will look at computers that can think for themselves, how to hack the human brain, how dust has become “smart,” the evolution of artificial intelligence and much, much more.
The future will be fascinating. Perhaps also terrifying, if you imagine some of the possibilities. Either way, we must understand it. Join us as we explore it.
View full post on ReadWrite
In September 2013, 188.7 million Americans watched 46 billion online content videos, while the number of video ad views totaled 22.9 billion. AOL (fueled by its Adap.tv acquisition) served the most video ads: 3.7 billion to Google’s 3.2 billion.
View full post on Search Engine Watch – Latest
As fantastic as cloud computing sounds, one of the greatest obstacles to its adoption is all the software that isn’t yet in the cloud. Which is to say, virtually all software used by most enterprises of reasonable size. While the industry anticipates the problems inherent in connecting the dots between services running on different clouds, today’s enterprise must fixate on connecting its crufty old software assets with the cloud.
It won’t be easy.
But it will be critical. According to Forrester, “Hybrid enterprise IT landscapes have become the norm,” with “Data and application logic… spread across different packaged and custom-built applications on premises, on managed infrastructures, and in the public cloud.” This complexity is only going to get worse.
A Brave New World
While developers have embraced the cloud with gusto, they all too quickly stumble into pesky old-world requirements like security and compliance. However much a developer may want to march boldly into the future of computing, an enterprise’s first obligation is to software and services that are secure and compliant with a host of regulations. It’s like having cold water dumped all over one’s dreams.
Indeed, Ed Laczynski (@edla), SVP of Cloud Strategy & Architecture at DataPipe, has made the same point.
Chris Hoff (@beaker), VP of Strategy and Planning at Juniper Networks, echoes Laczynski’s concerns, identifying “the shackles of legacy security/compliance colliding with the next iteration of platforms” as the major barrier to integration with cloud platforms.
Is It Really A Security Thing?
But is security really the biggest problem? It’s absolutely a big issue, but arguably a larger one is the difficulty of translating enterprise business rules to cloud systems, as PBSI’s Natalie Kilner Hughes argues. According to Forrester, enterprises are at varying stages of running business processes in a hybridized world of cloud and legacy apps.
Kilner Hughes’s solution is to turn to emerging technology platforms that automate the migration of legacy apps to the cloud, “saving up to 80% of the cost versus rewrite or replace options while ensuring functional equivalence.”
While this sounds fantastic, my hunch is that many CIOs are going to be reluctant to trust such a migration tool. As LogicWorks’ Jake Gardner suggests, a lack of in-house expertise “or the historical data to prove out the idea of moving” an application leaves enterprises unable to “properly project the savings,” causing them to hesitate to move a legacy application to the cloud.
They might, however, be willing to turn to the ESB (enterprise service bus) technology they likely already have running in house. The catch is that they may not fully trust ESBs, either, given that they’ve long promised more than they’ve actually delivered.
Still, more modern ESB vendors like MuleSoft could prove more productive. MuleSoft bills itself as a legacy-to-cloud integration expert, and it has been doubling revenues every year for several years, suggesting that enterprises are turning to it, and probably to others like it, for help.
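To make that integration work concrete, here is a minimal sketch of the kind of transform-and-forward glue an ESB provides. The legacy record layout, field names and cloud endpoint below are hypothetical illustrations, not MuleSoft’s actual API:

```python
import json
import urllib.request

# Hypothetical record as it might arrive from a legacy system's export.
legacy_record = {"CUST_NO": "000451", "CUST_NM": "ACME CORP", "BAL": "001299.50"}

def transform(record):
    """Map legacy field names and formats onto the cloud API's schema.
    This mapping is where the enterprise's business rules live, and it
    is exactly the part that is hardest to automate away."""
    return {
        "customerId": int(record["CUST_NO"]),
        "name": record["CUST_NM"].title(),
        "balance": float(record["BAL"]),
    }

def publish(record, url="https://api.example.com/v1/customers"):
    """POST the transformed record to a (placeholder) cloud REST endpoint."""
    body = json.dumps(transform(record)).encode("utf-8")
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    return urllib.request.urlopen(req)

print(transform(legacy_record))  # the mapped record, ready to ship
# publish(legacy_record)         # not called here: the endpoint is a placeholder
```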
Solving The Operations Mismatch
At its heart, the problem with cloud integration for developers is operational in nature. As James Urquhart (@jamesurquhart), vice president of Product Strategy at Enstratius (acquired by Dell), posits:
For developers, the challenges are largely operational issues, not architectural. Differences in images, DNS, load balancing, etc. Of course, integration of cloud with existing IT is almost always an app and data integration problem, though monitoring also problematic.
There are solutions for this problem. Enstratius, for one, enables integration at the application level: it lets developers focus on building their applications while it takes care of running them on multi-cloud architectures. But for enterprises less concerned with management and governance than with getting legacy applications connected to modern clouds in the first place, the real trick is integrating those applications along with the business rules that govern them.
There’s no easy fix for this. Part cultural, part technical in nature, the difficulty of embracing the cloud without leaving enterprise legacy systems behind is a big problem. Fortunately, it comes with a big cash prize for whichever company solves it first.
View full post on ReadWrite
Lessons from the past: Mount Laurel SEO firm’s founder learns as former … – Cherry Hill Courier Post
Lessons from the past: Mount Laurel SEO firm's founder learns as former …
Cherry Hill Courier Post
Ken Wisnefski is founder and CEO of WebiMax in Mount Laurel. WebiMax is nationally known, opening offices around the world and seeking to bring in about $15 million this year. / JOHN ZIOMEK/Courier-Post …
View full post on SEO – Google News
Shopping cart abandonment has become one of the major concerns for many online business owners. According to various reputable industry studies, like those by Internet Retailer, Baymard, Forrester, etc., the average shopping cart abandonment rate has risen to almost 70% over the last 3-4 years. The most crucial consequences of this negative tendency are [...]
View full post on Search Engine Journal
A collection of statistics released this month is creating doubts about the trend of “cord cutting” – when home viewers replace cable TV service with streaming video-over-Internet and over-the-air content. Cable companies are declaring victory, but when you dig deeper, there are signs that cable is still in trouble — and that what we’re hearing are the sounds of denial.
In its Fourth-Quarter 2012 Cross-Platform Report, ratings service Nielsen reported that in the U.S., there were more than five million households in 2012 that fit its definition of “Zero TV” homes. Zero TV is Nielsen’s neutral, but still kind of inaccurate, description of cable-cutting households that get video entertainment via computers, smartphones and tablets.
Five million homes seems like a lot, especially when you consider that this is up from two million homes in 2007. Indeed, there were a lot of headlines proclaiming “Cable Cutting Up 150%! Comcast in Flames! Time Warner Out of Time!”
Well, actually, nothing like that happened. In reality, that’s just 5% of the total TV market: hardly enough for the cable companies to get worked up about. Comcast CEO Brian Roberts has repeatedly made public comments dismissing the impact of cable cutting, and for now it appears that he’s right. Cable’s continued dominance suggests these companies don’t have much to worry about.
Of course, that’s what the Empire said about the Rebel Alliance.
Or, you know, what the telephone carriers once said about people who were giving up land-line phones in favor of wireless. The carriers used to insist the trend wasn’t real, until better cell coverage and services like E911 accelerated it to the point that no one could deny it any more. Telcos now offer TV and Internet service. Cable and satellite TV companies may face a similar shift.
Pay TV Numbers Aren’t So Hot, Either
Another set of statistics released this month points to a troubling sign for the cable and satellite companies: SNL Kagan reported that multichannel service providers (cable, satellite, and telco) managed to add just 46,000 customers in 2012. Much of that came in the fourth quarter, when a gain of 51,000 new customers reversed the subscriber losses of the second and third quarters of the year.
Forty-six thousand new users, out of a total of around 100.4 million, isn’t even a statistical blip: 0.04% growth is by most definitions flatter than a pancake. The average year-over-year growth of Zero TV homes has been pretty low, too, at 0.59% since 2007, but that’s still more than ten times the growth rate paid TV subscriptions managed last year. You have to wonder if the television providers’ claims that subscriptions were slow just because of the economic downturn were entirely accurate.
The U.S. is still in a slow recovery, so we will have to see if the upward trend of pay TV subscriptions continues before making any determination about pay TV’s flatline growth being connected to the economy.
For all of the hand-waving about cord-cutting “not existing” or being unimportant, a key fact is being blissfully ignored: those 600,000 new Zero TV users each year have to come from somewhere. They are either existing cable TV customers or incoming customers who have decided to go to the Internet/streaming model instead. Either way, that’s 5 million customers the pay TV providers don’t have.
Last year, the NPD Group estimated that the average monthly cable bill would hit $100 sometime this year or next. Using that estimate for some back-of-napkin math, five million Zero TV homes times $100 a month times 12 months works out to roughly $6 billion in annual revenue that is not going to pay TV.
Is it any wonder, then, that Comcast recently introduced a free sampling of its premium on-demand content in order to pull in more ongoing subscriptions to that content? Speculation about this promotion ranged from Comcast trying to better penetrate non-coastal markets that have a lower rate of on-demand video use to Comcast looking to juice up its margin.
Given flat growth, why not both reasons?
Watch Out For The Killer App
What the pay TV services need to watch out for is the killer app for cable cutters. In my household’s transition from a land line to cell-only phone service, it was the E911 service that made the decision for us: making sure emergency services knew exactly where we were calling from was very important.
I suspect that a similar killer app for cable-cutters will be a way to get access to live sports content. Yes, you can get content from MLB, NHL or the NBA – but special events or sports that are not covered by these media packages can be a hassle to watch.
I myself am lamenting the ongoing coverage of the NCAA Women’s Basketball tournament on the ESPN channels this month, because I can’t watch Notre Dame progress through the tournament. Unless one of the over-the-air networks broadcasts a game, I’m out of luck. Unless, that is, I get cable again.
Sports are perhaps the biggest reason (on the content side) holding people back from switching away from pay TV. If a network like ESPN or the new Fox Sports 1 were to take its oh-so-important broadcast rights and offer its content to Internet subscribers directly, that would probably be a nightmare scenario for pay TV companies.
It’s hard to imagine a situation where that would happen today, but if sports networks see a chance to make more revenue without giving TV providers a cut, would they take the shot?
Image courtesy of Shutterstock
View full post on ReadWrite