Posts tagged Solving
Maker Studios is the fifth most popular YouTube partner in the U.S., according to statistics reported today by Nielsen. Maker Studios models itself on Hollywood movie studios, except it produces content for YouTube instead of the Big Screen. It’s a great concept, but unfortunately low-brow fare like William The Psychic isn’t going to help YouTube transition to professional content.
Maker Studios pulled in 9.7 million unique U.S. viewers in May, averaging nearly ten streams per viewer. Cofounder Danny Zappin told Fast Company earlier this month that Maker Studios aims to build “a sustainable new kind of studio model for short-form content.”
The company has 200 employees and a 20,000-square-foot studio lot, which Zappin described as “somewhere in the middle between a random few cameras in an apartment, and a giant studio.” The business model is fairly simple: Maker Studios recruits “talent” and gives them the resources to produce a YouTube show. In return, Maker gets a portion of the ad revenue generated on YouTube.
Maker Studios has over 500 channels, many of them home to aspiring YouTube stars. One of its established stars is Ray William Johnson, who has 5.4 million YouTube subscribers. Johnson’s show is like an ‘America’s Funniest Home Videos’ for viral YouTube clips. In each episode, the gregarious host with Bart Simpson hair offers commentary over videos such as ‘How To Eat a Fly With a Straw’ and ‘Making The Bus Monitor Cry’ (a sickening video of school kids bullying a grandmother).
The Moms View is another Maker Studios production. This one is like Oprah for YouTube, with three attractive moms doing makeovers, book reviews and other content that supposedly appeals to mainstream moms.
Tutele is a popular Maker Studios channel that produces sketches and entertainment in both English and Spanish, including a skit called William The Psychic. Now, I may be missing some cultural references, but William The Psychic is just bad content. Or maybe it will become a cult show, in an ‘it’s so bad, it’s good’ way. Who knows…
There’s no doubt that Maker Studios is a success so far, coming in fifth in Nielsen’s ranking of YouTube corporate partners. But I came away less than impressed with the quality of Maker’s content, which is lowest common denominator stuff.
Let’s quickly compare Maker Studios to Machinima, which was third on Nielsen’s list. Machinima is targeted at gamers and features a wide range of programming for that audience – tips-and-tricks videos, gaming news shows and original TV-style programming like the drama Bite Me. Machinima may target a young audience, but it’s never juvenile and unoriginal like some of the Maker Studios content I viewed today.
Maker shows like Ray William Johnson and The Moms View are the type of content I go out of my way to avoid on TV. This isn’t the future of television; it’s the worst aspects of daytime TV rehashed for YouTube. That surely isn’t the image YouTube wants to present to advertisers as it tries to compete with the big TV networks.
View full post on ReadWriteWeb
Data transfer speeds have been getting faster and faster, but that doesn’t mean we’re actually reaping the full benefits. A few years ago, Jim Gettys put his finger on the “criminal mastermind” behind poor networking performance. Dubbed bufferbloat, the problem is not a simple one to solve. Controlled Delay (CoDel) active queue management (AQM), however, may represent serious progress toward a solution.
The problem, in a nutshell, is that TCP was not designed with today’s bandwidth in mind. As Gettys wrote in “the criminal mastermind” post, the problem lies “end-to-end” in applications, operating systems and home networks. Buffering is necessary, but too much buffering is a problem. And today’s devices and operating systems are doing too much buffering – which is degrading performance. Says Gettys, “TCP attempts to run a link as fast as it can, any bulk data transfer will cause a modern TCP to open its window continually, and the standing queue grows the longer a connection runs at full bandwidth, continually adding delay unless a AQM is present.”
The Linux Tips page on the Bufferbloat wiki highlights the scope of the problem. According to the page, buffers can “hide” in the operating system layer (Linux transmit queue), device driver, hardware (which has buffers of its own), and on and on. One of the long-term solutions to bufferbloat is active queue management (AQM), and the Controlled Delay (CoDel) AQM proposed by Kathleen Nichols and Van Jacobson might be a big piece of the puzzle.
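As a small illustration of where one of those buffers lives, the snippet below reads the transmit-queue length that Linux exposes in sysfs. This is only a sketch, assuming a Linux host; the sysfs path and the historical default of 1000 packets are standard, but the driver rings and hardware FIFOs the wiki mentions stay invisible to this view.

```python
from pathlib import Path

# A minimal sketch, assuming a Linux host: one place buffers "hide" is
# the interface transmit queue. tx_queue_len shows how many packets a
# device may queue before the stack pushes back (historically 1000 on
# Ethernet, which at slow uplink speeds can mean seconds of added delay).
for dev in sorted(Path("/sys/class/net").iterdir()):
    try:
        qlen = (dev / "tx_queue_len").read_text().strip()
    except OSError:
        continue  # some virtual devices don't expose the knob
    print(f"{dev.name}: tx_queue_len={qlen}")
```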
CoDel To the Rescue?
CoDel (pronounced “coddle”) is being called a “no-knobs” AQM. That means users and admins aren’t expected to tweak any parameters to get the best performance out of CoDel. According to the paper published on ACM Queue, “CoDel’s algorithm is not based on queue size, queue-size averages, queue-size thresholds, rate measurements, link utilization, drop rate or queue occupancy time. Starting from Van Jacobson’s 2006 insight (PDF), we used the local minimum queue as a more accurate and robust measure of standing queue.”
More importantly, CoDel promises to distinguish between “good” queues and “bad” queues. It’s supposed to minimize delay without hampering bursts of traffic. “The core of the bufferbloat-detection problem is separating good from bad … good queue is occupancy that goes away in about one RTT (round-trip time); bad queue persists for several RTTs. An easy, robust way to separate the two is to take the minimum of the queue length over a sliding time window that’s longer than the nominal RTT.”
Finally, CoDel is supposed to be suitable for deployment in a wide range of devices. The paper says CoDel is “simple and efficient,” and can be deployed in low-end devices or “high-end commercial router silicon.”
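To make those target/interval mechanics concrete, here is a loose, single-threaded Python paraphrase of the dequeue-side logic. The class name and the simplifications are mine; it omits details of Nichols and Jacobson’s published pseudocode (such as the hysteresis on re-entering the dropping state), and real deployments live in kernel queue disciplines, not application code.

```python
import time
from collections import deque
from math import sqrt

TARGET = 0.005    # 5 ms: acceptable standing-queue ("good queue") delay
INTERVAL = 0.100  # 100 ms: observation window, longer than a nominal RTT

class CoDelSketch:
    def __init__(self):
        self.q = deque()
        self.first_above = None  # when a standing queue becomes "bad"
        self.drop_next = 0.0     # scheduled time of the next drop
        self.count = 0           # drops in the current dropping state
        self.dropping = False

    def enqueue(self, pkt):
        self.q.append((time.monotonic(), pkt))  # timestamp on arrival

    def dequeue(self):
        while self.q:
            now = time.monotonic()
            ts, pkt = self.q.popleft()
            sojourn = now - ts  # how long this packet sat in the queue
            if sojourn < TARGET:
                # Good queue: delay drained below target, so any burst
                # we saw was absorbing load, not a standing queue.
                self.first_above = None
                self.dropping = False
                return pkt
            if self.first_above is None:
                # Above target, but possibly just a burst; give it a
                # full INTERVAL before declaring a bad queue.
                self.first_above = now + INTERVAL
            if now < self.first_above and not self.dropping:
                return pkt  # still inside the grace window
            if not self.dropping:
                # Standing queue persisted a whole INTERVAL: start
                # dropping to signal the sender to back off.
                self.dropping = True
                self.count = 1
                self.drop_next = now + INTERVAL
                continue  # drop pkt and fetch the next one
            if now >= self.drop_next:
                # Control law: space successive drops by
                # INTERVAL / sqrt(count), tightening as drops mount.
                self.count += 1
                self.drop_next = now + INTERVAL / sqrt(self.count)
                continue  # drop pkt
            return pkt
        return None  # queue empty
```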
Proof is in the Pudding, er, Deployment
A fair amount of testing has been done on CoDel, but the proof has to come via real-world deployments. According to the paper, Nichols and Jacobson performed “several thousand simulation runs” that showed CoDel “performed very well” with results “compelling enough to move on to the next step of extensive real-world testing in Linux-based routers.”
Note that deploying this in a home-based router may not be enough to rid yourself of bufferbloat. The researchers point out “a savvy user could be tempted to deploy CoDel through a CeroWrt-enabled edge router to make bufferbloat disappear. Unfortunately, large buffers are not always located where they can be managed but can be ubiquitous and hidden. Examples include consumer-edge routers connected to cable modems and wireless access points with ring buffers. Many users access the Internet through a cable modem with varying upstream link speeds. … The modem’s buffers are at the fast-to-slow transition, and that’s where queues will build up: inside a sealed device outside of user control.”
So, don’t expect CoDel to solve the bufferbloat problem overnight, or even by the end of the year. Gettys says that “work to integrate adaptive AQM algorithms into wireless systems work will take months or years, rather than the week that initial CoDel prototype implementation for Ethernet took. But at least much testing of the CoDel algorithm, experimentation, and refinement can now take place.”
There’s also the matter of tackling wireless, which Gettys says “may be much more difficult, both because queuing is sometimes much more complex than Ethernet, but also since packet aggregation has resulted in OS/driver boundaries hiding information that is necessary for proper functioning.”
But CoDel is still very good news, and it shows that the community that has come together around the bufferbloat problem is making progress. Bufferbloat won’t be solved in one fell swoop by a single breakthrough, but through a number of technology improvements over time.
View full post on ReadWriteWeb
College is stuck in the past, and tech is always trying to tow it out of the mud. The trick is finding a solution that provides more access to higher education, improves the learning experience, and enables future improvement, instead of miring college in some company’s proprietary system. Coursera has such an offering, and it announces today that some of the world’s top universities will participate in its experiment.
Princeton; Stanford; the University of California, Berkeley; the University of Michigan; and the University of Pennsylvania will all offer courses on the platform for free to anyone in the world with Internet access. To help bring Coursera up to speed, Kleiner Perkins Caufield & Byers and New Enterprise Associates have backed it with $16 million in venture funding.
“We see a future where world-renowned universities serve millions instead of thousands,” says Coursera co-founder Daphne Koller. “Our mission is to teach the world and make higher education available for everyone,” says her partner, Andrew Ng.
“By partnering with the world’s leading universities, we’re making college-level classes more accessible to anyone who wants to learn,” Koller says.
And it’s not just learning by rote. Coursera is a platform for instruction, discussion and grading at Internet scale. It extends the influence of universities around the world, and it provides them data-driven insights into how to adapt higher education to the global promise of the Internet.
More Than an Afterthought
Many top universities, including Yale and MIT, offer lectures online for free. The Coursera cofounders call that “the afterthought model.” It doesn’t threaten the established order to put lecture videos up on iTunes, because the experiential and interpersonal parts of learning are missing.
Koller explains that the lecture model was invented out of technological necessity. In the medieval university, the professor read the only copy of the book aloud, and students took notes. That basic format persists today, even though the technological constraints seem absurd in today’s classroom.
Coursera is an education technology born out of the practice of teaching. Cofounders Koller and Ng are Stanford computer science professors. Last fall, they developed what would become Coursera as Stanford’s first online education platform, offering two computer science classes online. Some 200,000 people enrolled. By this spring semester, over one million students around the world had enrolled in these and subsequent courses.
Some of Coursera’s new courses:
- Internet Technology and History – University of Michigan
- Networked Life – University of Pennsylvania
- A History of the World Since 1300 – Princeton University
- Fantasy and Science Fiction: The Human Mind, Our Modern World – University of Michigan
- Listening to World Music – University of Pennsylvania
- Introduction to Genome Science – University of Pennsylvania
- Cryptography – Stanford University
- Machine Learning – Stanford University
- Computer Vision – University of California, Berkeley
- Design and Analysis of Algorithms I – Stanford University
- Software Engineering for SaaS – University of California, Berkeley
College at Scale
The lecture is just one piece of a Coursera course, and it’s not the most important. The two essential elements are peer grading of assignments and the class forum. These present the central technical challenges of delivering a meaningful classroom experience at Internet scale.
The Coursera grading technology is good at crunching structured output. Things were easier when the courses focused on computer science and engineering, because student work could be tested and quantified automatically. But for this launch, Coursera has figured out how to extend its technology to humanities courses as well, using peer grading.
The professor comes up with a grading rubric for an assignment and gives it to students – after they submit their work – along with practice grading exercises. Once the students have completed the training, they’re qualified to grade each other. The process uses theory from crowdsourcing technology like Amazon Mechanical Turk.
Coursera has demonstrated that its peer grading can be about as accurate as your typical university teaching assistant. But unlike the T.A., it can grade 200,000 papers.
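Coursera hasn’t published its grading pipeline, so the sketch below is only a plausible reconstruction of the idea described here: gate graders on practice exercises against an instructor key, then take the median of qualified peers per rubric criterion. The rubric, tolerance and function names are all illustrative assumptions.

```python
from statistics import median

# Hypothetical rubric: (criterion, max points). Coursera's real data
# model is not public; this only illustrates the aggregation idea.
RUBRIC = [("thesis", 4), ("evidence", 4), ("style", 2)]

def calibrated(practice_scores, instructor_key, tolerance=1):
    """A grader qualifies if their practice grades land within
    `tolerance` points of the instructor's key on every criterion."""
    return all(abs(practice_scores[c] - instructor_key[c]) <= tolerance
               for c, _ in RUBRIC)

def aggregate(peer_grades):
    """Median across qualified peers per criterion: the median
    shrugs off a single careless or adversarial grader."""
    return {c: median(g[c] for g in peer_grades) for c, _ in RUBRIC}

# Example: three qualified peers grade one essay.
peers = [
    {"thesis": 3, "evidence": 4, "style": 2},
    {"thesis": 4, "evidence": 3, "style": 2},
    {"thesis": 3, "evidence": 4, "style": 1},
]
print(aggregate(peers))  # {'thesis': 3, 'evidence': 4, 'style': 2}
```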
The other key technology in Coursera is its forums. No manual system could moderate a class discussion with hundreds of thousands of people in it. Coursera’s forum technology identifies and parses duplicate questions, and it auto-suggests related questions as students type. It also uses a Stack Overflow-style reputation system to surface the best conversations.
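Coursera’s actual matcher isn’t public either, but the duplicate-question idea can be illustrated with something as simple as word-overlap (Jaccard) similarity; a production system would use far richer text models. The names and threshold below are assumptions for illustration.

```python
def tokens(text):
    return set(text.lower().split())

def jaccard(a, b):
    """Word-overlap similarity between two questions, 0.0 to 1.0."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def suggest_related(draft, existing, threshold=0.4, top=3):
    """As a student types, rank existing threads by overlap with the
    draft question; above `threshold`, surface them as likely
    duplicates before a new thread gets created."""
    scored = sorted(((jaccard(draft, q), q) for q in existing),
                    reverse=True)
    return [(round(s, 2), q) for s, q in scored[:top] if s >= threshold]

existing = [
    "Why does gradient descent diverge with a large learning rate?",
    "How do I submit assignment 2?",
    "What is the deadline for assignment 2?",
]
print(suggest_related("where do I submit assignment 2", existing))
```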
Is It Time to Use the Word “Disruptive”?
Clayton Christensen’s notion of “disruptive innovation” gets tossed around in tech all the time, and it’s usually too optimistic. That’s especially the case in education technology. Education is too important – and its cultures are too deeply entrenched – to transform overnight with one idea.
We should save the D-word for when an innovation is demonstrably changing the world. But higher education’s stagnant methods and skyrocketing price of access could certainly use some disruptive innovation.
The founders of Coursera recognize that online education is immature. There are essential curricular questions that haven’t been answered yet. How much reading and writing should students be asked to do? How much social interaction is enough? Too much? What is the value of face-to-face learning versus interacting in a forum?
Doesn’t every institution, every course and every instructor have a different balance of answers to these questions?
But that’s exactly the value Coursera stands to offer. It will help online education scale, and because its courses are free and worldwide, it will generate far more data about how students actually learn than any single campus could.
If 2,000 people get an answer wrong the same way, that’s a strong signal to the professor that there’s a conceptual problem in the instruction. In response, Coursera can automatically generate messages to students to correct that error.
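As a toy illustration of that feedback loop (the pipeline and thresholds here are my assumptions, not Coursera’s), clustering identical wrong answers makes the signal easy to detect:

```python
from collections import Counter

def flag_misconceptions(wrong_answers, min_count=2000):
    """If thousands of students give the *same* wrong answer, that is
    likely a shared misconception rather than random error. Each
    flagged cluster could trigger an automatic clarification to the
    affected students and a report to the professor."""
    counts = Counter(a.strip().lower() for a in wrong_answers)
    return [(ans, n) for ans, n in counts.most_common() if n >= min_count]

# Tiny demo with a lowered threshold:
answers = ["9.8", "9.8", "10", "9.8"]
print(flag_misconceptions(answers, min_count=3))  # [('9.8', 3)]
```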
This platform was designed by trained researchers in machine learning. Coursera is not just an innovative classroom model. It’s a system for analyzing the effectiveness of college. It will be great to see the outcomes of this array of new, free, worldwide courses.
As for the on-campus experience, hopefully this new online classroom model will free up departments, instructors and students for more face-to-face work. The Internet is always present. Maybe the real value to being on campus, the one worth paying for, is the part that happens offline.
You can learn more and find the course offerings at Coursera.org.
Photos via Shutterstock
View full post on ReadWriteWeb
Fifty billion Internet-capable devices – if that is indeed the number – communicating sensor data over the networks we use today probably won’t have Ethernet plugs. And if they’re mobile by nature, they won’t rely on Wi-Fi routers. If, as has been generally presumed since the 50 billion projection was first quoted last year, there will soon be more devices communicating over the Internet than there are people, there simply isn’t that much wireless spectrum to cover it all.
This is where this story would end if we all put our faith in presumptions instead of technology. Last January, the Organization for Economic Cooperation and Development (OECD) published a report (PDF available here) that foresaw a world of intercommunicating devices as a critical component of a healthy global economy. It noted the term “Internet of Things,” but settled on the more industrial term for the concept: machine-to-machine communication (M2M). The report laid out use cases for M2M devices ranging from automotive speedometers that register relative speed, perhaps to other devices within the same car, to brake monitoring systems that communicate a car’s relative ability to stop to insurance companies. But the system that could make M2M both ubiquitous and inexpensive, the report made clear, is ironically the same system that carriers like AT&T are begging to decommission: the 2G network.
Winding Down and Winding Up
2G wireless technology could be the most convenient, most efficient, and most ubiquitous communication network for M2M devices presently available, the OECD report claims. However, the world’s major wireless carriers have either begun decommissioning their 2G systems, or are planning to.
As the OECD report reads, “2G networks are scheduled to be decommissioned and replaced by 4G networks in the coming five to 15 years. Building an M2M solution that only functions on 2G may not be future proof. However, there are very few or no 4G modules available and it is not expected that 3G coverage will become universal.”
Mobile Network Operators (MNO), the report makes clear in multiple places, are built around service to people. The driving force in delivering service is customer satisfaction, which for people often consists of somewhat more than an ACK signal or the lack of one. So not only the technical but the financial infrastructure of wireless networks would need to change if they are to address the demands of a device-driven network. For example, one customer may operate tens of thousands of simultaneous devices, as opposed to the average three. Imagine how the billing system would have to be reconfigured. And when devices in those networks cross one another’s territories, consider the roaming agreements that MNOs would need to make with one another. “For many MNOs the systems aimed at supporting service to consumers are not capable of meeting the demands of M2M users,” says the report.
But even these factors become moot if the underlying network ceases to exist. That warning comes from Alex Brisbourne, CEO of KORE Wireless, one of North America’s major providers of M2M technology to carriers and infrastructure support systems. As wireless carriers busily wind down their 2G services in preparation for shutting them off, says Brisbourne, the spectrum frontiers for M2M are being relocated to 3G and 4G systems that are neither ubiquitous, consistent, nor relatively cheap.
“Folks have got to stand up and start thinking about how they’re going to deal with 3G and beyond in M2M,” Brisbourne tells ReadWriteWeb in a recent interview. “On the other hand, I think the carriers need to be more affirmative and communicative with regard to their planning horizons on 2G and 3G support. Otherwise we’re going to see people, frankly, getting their fingers burned. They’re going to deploy [M2M], and suddenly find people have taken their network away.”
Major carriers see the remaining benefit of 2G technologies in terms of amortization – how long they can continue to offset expenditures. It will be difficult to persuade carriers to keep 2G going with the promises of revenues from M2M services alone – revenues that could be “microscopic,” says Brisbourne, in comparison with what carriers reap from data plans from human beings. “Clearly the slaves that all the carriers are playing to, honestly, are smartphones and their customers who are being locked into $50 to $80-per-month contracts.”
But the argument that there isn’t enough spectrum anywhere to accommodate the flood of incoming M2M devices, Brisbourne believes, is something of a straw man. KORE Wireless’ projections indicate, he tells us, that assuming M2M adoption grows at the high end of analysts’ projections, by the year 2016, all M2M communication worldwide may still be manageable on a total of 5 MHz of spectrum – “a tiny sliver of the network.”
A Little Practical Incentive
Brisbourne cautions us that he does not perceive the carriers’ motives, placing people above things, as being “disingenuous” in any fashion. “I think they’re driven by very, very practical business considerations, and the efficiency of the licensed spectrum is one of them,” he says. But the CEO has an idea, and it starts like this: “The average Joe in the streets doesn’t need to have a 2G phone.”
It involves a kind of going-away present. “Some of these people have still got a Nokia brick dating back 10 years, that have got things like a dialpad on it… If I want to get rid of the last 5 million 2G phones in the world that are on my network, all I do is give them the incentive to come in and get a nice, bright, shiny new Samsung free of charge, and all those people will flock in and take it,” he suggests. “It’s just a matter of incentivizing people to come in and do it.” In fact, he adds, it may be prudent for carriers to offer those dialpad users simple feature phones as an option instead of smartphones – because maybe they don’t want smartphones anyway.
Once the phones are replaced, carriers can then act as though the 2G nodes are being shut down, the way VHF TV channels were a few years ago… only not actually do it. “What you possibly do is keep it, almost unbeknownst to many, for certain aspects of 2G.” Brisbourne likens it to the maintenance of circuit-switched data in telephone systems – a half-century-old system still in use today.
Even if carriers don’t go for this idea, the CEO says, carriers need to come up with some explicit plan for adopting and maintaining M2M. And if it’s 3G instead of 2G, that decision needs to be made now, so MNOs, device makers, and service operators can build consistent services today.
“You can’t sit there saying, ‘My feet are dry, my feet are dry… oh, God, I’m up to my knees in water!’ as the tide keeps on coming in,” says Brisbourne, in his classic storytelling mode. “If the tide is coming in and the move is toward 3G, then people with M2M services that depend on wireless connectivity for their very viability need to plan to encompass it. And for that, they need to have a clear, publicly stated direction – not behind-the-scenes murmuring about the timeline for those services to be in play. Otherwise, we do the industry a disservice. And in my opinion, the carrier community is not doing as robust a job on this as it ought to be.”
View full post on ReadWriteWeb
Pagination has always been a sticky problem for search engines. While not nearly as complex an SEO issue as faceted navigation, paginated pages can certainly cause crawling inefficiencies and excessive duplicate content. They can also create problems with undesirable pages ranking for important terms, in cases…
Please visit Search Engine Land for the full article.
Five-Step Strategy For Solving SEO Pagination Problems
Search Engine Land
10 Steps to a Successful SEO Migration Strategy
View full post on SEO – Google News
Organizations that consider virtualization often face issues that keep the operation from running at top performance. Server sprawl is a big problem; when it is addressed, companies can often dramatically reduce the number of servers they maintain.
The following case studies demonstrate the opportunities that open up when organizations decide to consolidate servers with modern virtualization technology.
American Institute of Certified Public Accountants
Frederick Memorial Healthcare
If you a) believe in the power of search—as a technology, as a consumer experience and as a business model, and b) believe in the economic promise of the “local” opportunity, then you need to commit to a simple axiom: the ultimate destiny of a local search is a phone call.
Two relatively new macro trends [...]
*** Read the full post by clicking on the headline above ***
View full post on Search Engine Land: News About Search Engines & Search Marketing