Posts tagged Stanford
Stanford University has a lot of smart people. But the school’s nickname is dumb: the Stanford Cardinal. As in the color red. How unimaginative. Here’s a better idea: the Stanford Entrepreneurs.
Yes, harder to fit on a souvenir coffee mug but a much more accurate handle. A recent study by business research firm CB Insights shows Stanford dominates all other universities in the field of alumni entrepreneurship.
The first-ever University Entrepreneurship Report tracks companies founded by or led by alumni (and dropouts) from six top U.S. schools – Stanford, Harvard, UC Berkeley, New York University, University of Pennsylvania and MIT – and the funding they’ve received.
Stanford alums have raised $4.1 billion in 203 financings. Harvard alums are second, at $3.8 billion in 112 financings. But strip out Facebook and Harvard funding drops to just $1.8 billion.
“We did go into the study thinking Stanford would stand out from the crowd but we were surprised at how much they dominated,” says CB Insights CEO Anand Sanwal. “The level by which they have a lead was more eye-popping than we imagined.”
The University Entrepreneurship Report is of interest to a lot of different groups. Schools like Stanford and Harvard, which used to churn out a lot of investment bankers but have pivoted from that focus, want to see if their shift is paying off.
Where Are The Hotshot Startup Founders?
Investors, of course, want to know where to find the hotshot entrepreneurs.
And the report is of interest to local groups because it also measures “alumni leakage.” Are smart students soaking up community resources then leaving town to start companies and employ people elsewhere? Stanford and Cal alumni usually stay put, which is not surprising since Silicon Valley is next door. Harvard alums tend to start companies in places other than Massachusetts – mostly Silicon Valley and New York – but MIT grads are less mobile.
“That speaks to the nature of the startups coming out of those universities,” Sanwal says. “When you’re dealing with the hard sciences, like at MIT, you might need ongoing access to the specialized talent of the universities and the professors, so you might stick closer to your alma mater. Whereas if you’re in tech or social media, you don’t necessarily need to be tied to Boston, so you’re more inclined to go where the money is.”
A Different Kind Of Diversity
At all six universities studied, tech startups attracted the most funding. But Berkeley and MIT alums founded more companies in industries outside of tech.
“We were surprised by the diversity of startups coming out of Cal and MIT and the general lack of diversity of startups elsewhere,” Sanwal says. “I expected to see more overall in the life science and clean tech realms at the other schools. I think Cal and MIT are more progressive and forward-thinking when it comes to commercialization of research.”
What About Your School?
If you’re wondering why your own alma mater is not included in the report, CB Insights chose these six schools because they have the most data. They’re also the six with the most entrepreneurial alums.
“We’ve had a lot of other universities calling us and asking why they’re not in the report,” Sanwal says. “It really came down to how much data we could capture. A lot of people said why didn’t we include University of Chicago, because that’s where Groupon started. We wanted schools where there was a consistency of funding across many companies, not just a blip with one hot company.” (Or, these days, not so hot.)
Why Stanford Wins
So why is Stanford the big winner? A lot of credit goes to the school’s emphasis on technology. A lot also goes to the many angels and VCs who live in the neighborhood. They like to support the home team, as do investors everywhere.
And what about students who want to found tech companies someday? Does this study point the way to the best colleges for them? Sanwal seems to think so: “This begs the question: do you have to go to a top school to get funding for your startup?” he said. “I don’t see that changing. It’s not just what you know but who you know.”
Students will get about two hours of video content per week, broken up into chunks of about 12 minutes or smaller. They’ll also get in-video quizzes and standalone quizzes, as well as programming assignments.
Here’s the description of the course:
Students will learn how to reason about the security of cryptographic constructions and how to apply this knowledge to real-world applications. The course begins with a detailed discussion of how two parties who have a shared secret key can communicate securely when a powerful adversary eavesdrops and tampers with traffic. We will examine many deployed protocols and analyze mistakes in existing systems. The second half of the course discusses public-key techniques that let two or more parties generate a shared secret key. We will cover the relevant number theory and discuss public-key encryption, digital signatures, and authentication protocols. Towards the end of the course we will cover more advanced topics such as zero-knowledge, distributed protocols such as secure auctions, and a number of privacy mechanisms. Throughout the course students will be exposed to many exciting open problems in the field.
A background in discrete probability is also said to be helpful. If you want a free course in crypto at your leisure, this sounds like a great option.
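The first half of the course, as the description notes, centers on two parties communicating securely with a shared secret key. For a taste of what that looks like in practice, here is a minimal sketch of authenticated encryption using the third-party Python cryptography package’s AES-GCM primitive; this is an illustration of the topic, not material from the course itself:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Two parties share a secret key; an adversary who eavesdrops and
# tampers with traffic (the course's threat model) can neither read
# a message nor modify it without detection.
key = AESGCM.generate_key(bit_length=128)   # the shared secret

nonce = os.urandom(12)                      # must never repeat for a given key
ciphertext = AESGCM(key).encrypt(nonce, b"attack at dawn", None)

# The receiver decrypts with the same key; any tampering with the
# nonce or ciphertext raises cryptography.exceptions.InvalidTag.
plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)
assert plaintext == b"attack at dawn"
```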
Boneh is the head of the applied cryptography group at Stanford, and has focused on applications of cryptography to computer security. He’s an editor of the Journal of Cryptology and the Journal of the ACM.
If you were to give a moment’s thought to the question of what’s keeping high-volume storage centers — especially the ones with cloud architectures — from replacing their multiple, redundant disk arrays with memory, you’d have time left over after you’d concluded the answer was cost. But what if the cost differentiator was getting smaller every year? Would there be a point in time, in the foreseeable future, where the practical costs of running a data center made purely of DRAM would be equivalent to one that uses traditional disk arrays?
A study just released by Stanford University (PDF available here) has come to the incredible conclusion that we may already be crossing that threshold today.
Factor in the principle that the cost of storing data on any medium rises as the rate at which that data must be accessed rises, as a team led by Stanford University computer science professor John Ousterhout has done, and you may conclude that a big bank of DRAM (no, not flash, but volatile memory) would be less expensive to acquire and maintain over a three-year period than a big array of disks serving the same purpose, at least for less-than-huge data sets (in other words, not Twitter-sized).
What Prof. Ousterhout’s team has done is apply to memory the lessons that modern private-cloud platforms such as OpenStack have learned about storage arrays. The first lesson is to stick with cheap, readily available “commodity” parts. The second is to presume that such parts are prone to failure, and to plan for massive redundancy, because several components rarely fail at exactly the same time.
The rule the Stanford team cites that leads directly to their conclusions is referred to as “Jim Gray’s Rule,” although many hardware designers know it as “the five-minute rule” (Queue magazine page available here). The rule is named for researcher Jim Gray, who last served at Microsoft before he was lost at sea in 2007. As formulated in 1987, the rule governed whether a frequently accessed element of data should be kept in a memory cache or written to disk. Its basic principle: data accessed at least once every five minutes is cheaper to keep in memory than on disk.
In a modern context, as the Ousterhout team explains it, the Gray Rule looks like this: “If portions of the disk are left unused then the remaining space can be accessed more frequently. As the desired access rate to each record increases, the disk utilization must decrease, which increases the cost per usable bit; eventually a crossover point is reached where the cost/bit of disk is no better than DRAM. The crossover point has increased by a factor of 360x over the last 25 years, meaning that data on disk must be used less and less frequently.”
A chart representing which storage technology has the lowest overall cost of ownership for varying dataset sizes and access frequencies. [Courtesy Stanford University Computer Science Dept.]
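To see how such a crossover point falls out of the arithmetic, here is a minimal sketch of the five-minute-rule calculation. The drive prices and specs below are illustrative assumptions, not figures from the Stanford paper:

```python
# Break-even calculation in the style of Gray's five-minute rule:
# a record is worth keeping in DRAM if it is accessed more often
# than once per break-even interval. All prices and specs below
# are assumed for illustration, not taken from the Stanford paper.

RECORD_SIZE = 1024          # bytes (the paper's 1 KB record)
DISK_PRICE = 100.0          # USD per commodity drive (assumed)
DISK_IOPS = 100.0           # random accesses/sec per drive (assumed)
DRAM_PRICE_PER_MB = 0.01    # USD per MB of DRAM, ~$10/GB (assumed)

records_per_mb = (1024 * 1024) // RECORD_SIZE   # 1,024 records per MB

# Seconds at which the cost of enough disk arms to sustain the access
# rate equals the cost of simply holding the data in DRAM.
break_even_s = (records_per_mb / DISK_IOPS) * (DISK_PRICE / DRAM_PRICE_PER_MB)

print(f"break-even interval: {break_even_s / 3600:.1f} hours")
# -> about 28 hours with these assumptions, the same ballpark as the
#    paper's 30-hour crossover for 1 KB records.
```

Push the assumed DRAM price down and the break-even interval stretches further; over 25 years, that kind of drift is how the crossover point moved by the 360x factor the team describes.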
In a 20-year span of history, the size of the largest commercially available hard disks has expanded from 30 MB to 500 GB. Over that same span, the time required for a system to read an entire disk full of single-kilobyte (1 KB) blocks from edge to center has increased by a far greater factor, from five minutes to 30 hours. This points to an unavoidable problem with the physics of hard disk drives: manufacturers can keep increasing areal density, but capacity grows far faster than sustained transfer rates, and those sustained rates are only realized when reading very large blocks. The time to read a whole disk full of large blocks has increased only from 15 seconds for a 1987-era 30 MB drive to 1 hour 23 minutes for a modern half-terabyte drive.
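Those figures are enough to make the imbalance concrete. This quick back-of-the-envelope sketch just re-derives the effective throughput implied by the numbers above:

```python
# Effective disk throughput implied by the figures quoted above.
MB, GB = 1e6, 1e9

# (capacity in bytes, time to read the whole disk in seconds)
small_1987 = (30 * MB, 5 * 60)        # 1 KB blocks, 5 minutes
small_now  = (500 * GB, 30 * 3600)    # 1 KB blocks, 30 hours
large_1987 = (30 * MB, 15)            # large blocks, 15 seconds
large_now  = (500 * GB, 83 * 60)      # large blocks, 1 h 23 min

def mb_per_s(capacity, seconds):
    return capacity / seconds / MB

print(f"1 KB blocks:  {mb_per_s(*small_1987):.1f} -> {mb_per_s(*small_now):.1f} MB/s")
print(f"large blocks: {mb_per_s(*large_1987):.1f} -> {mb_per_s(*large_now):.1f} MB/s")
# Capacity grew roughly 17,000x over the same period, while small-block
# throughput grew only ~46x and large-block throughput ~50x.
```

Either way, throughput fell more than two orders of magnitude behind capacity growth, which is why reading a full disk of small blocks now takes 30 hours.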
Databases are not composed of very large blocks; it’s these very small records where latency is introduced most often. And for so-called “NoSQL” databases that use vertical storage to improve logical access times, some of that speed gain is lost anyway to disk latencies that DRAM simply does not have.
The team’s stunning conclusion is this: “With today’s technologies, if a 1 KB record is accessed at least once every 30 hours, it is not only faster to store it in memory than on disk, but also cheaper.” The report goes on to say that, for that block to be accessible at the same rate on disk as in DRAM, the disk utilization rate must be capped at 2%.
Stanford’s name for a system that uses all DRAM, with controls based on cloud disk-array techniques, is RAMCloud. (Insert Sylvester Stallone poster here.) The research team applied its results to a modern model of a typical online retailer with $16 billion of annual revenue. At an average order size of $40, it probably processes 400 million orders per year. The maximum record size for an individual order is probably 10 KB. That means it processes at most about 4 terabytes of orders per year, which is not anywhere near a Facebook-scale database.
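That sizing arithmetic is easy to check; here it is spelled out, using only the figures from the example above:

```python
# Re-deriving the hypothetical retailer's working-set size.
annual_revenue = 16e9                  # USD
avg_order = 40.0                       # USD
orders_per_year = annual_revenue / avg_order       # 400 million orders
record_size = 10 * 1024                # bytes: 10 KB max per order
dataset_tb = orders_per_year * record_size / 1e12

print(f"{orders_per_year:,.0f} orders/year, ~{dataset_tb:.1f} TB of records")
# -> 400,000,000 orders/year and about 4 TB of order records,
#    nowhere near a Facebook-scale database.
```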
A Stanford-style RAMCloud configuration for maintaining just that database over a one-year period (not counting other applications, VMs, and data) would cost that retailer about $240,000, by the team’s estimate.
“Numerous challenging issues must be addressed before a practical RAMCloud can be constructed,” the team writes in its conclusion. “At Stanford University we are initiating a new research project to build a RAMCloud system. Over the next few years we hope to answer some of the research questions about how to build efficient and reliable RAMClouds, as well as to observe the impact of RAMClouds on application development.”
Today at the Strata conference, the Stanford Visualization Group debuted DataWrangler, a Web-based visual tool for cleaning up messy data. According to its website, “Wrangler allows interactive transformation of messy, real-world data into the data tables analysis tools expect.” Data can be exported as CSV, TSV, or JSON.
Data wranglers can use the tool with the group’s data visualization tool Protovis, or with tools such as Excel, R and Tableau.
Another thing I often hear is that a large fraction of the time spent by analysts — some say the majority of time — involves data preparation and cleaning: transforming formats, rearranging nesting structures, removing outliers, and so on. (If you think this is easy, you’ve never had a stack of ad hoc Excel spreadsheets to load into a stat package or database!)
Putting these together, something is very wrong: high-powered people are wasting most of their time doing low-function work. And the challenge of improving this state of affairs has fallen through the cracks between the analysts and the computer scientists.
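The transformations being described map onto a handful of recurring operations. As a rough illustration of that grunt work (a generic pandas sketch, not DataWrangler’s own output; the file and column names are made up):

```python
import pandas as pd

# A generic taste of the cleaning chores described above. The file
# and column names are hypothetical, purely for illustration.
df = pd.read_csv("ad_hoc_export.csv", skiprows=2)   # skip a report banner

# Transform formats: turn "$1,234" strings into numbers.
for col in ["2009", "2010", "2011"]:
    df[col] = pd.to_numeric(
        df[col].str.replace(r"[$,]", "", regex=True), errors="coerce"
    )

# Rearrange nesting: unpivot one-column-per-year into tidy rows.
tidy = df.melt(id_vars=["region"], var_name="year", value_name="revenue")

# Remove outliers: keep the middle 98% of values.
lo, hi = tidy["revenue"].quantile([0.01, 0.99])
tidy = tidy[tidy["revenue"].between(lo, hi)]
```

A visual tool like DataWrangler aims to generate this sort of transformation interactively instead of by hand.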
Search and news stories are very closely related. In this age of watching TV with laptop at hand, we tend to search for things we see on the TV, and the CES flood of internet TV applications and hardware attests to this. So it is no wonder that the decision by the predicted top pick of the NFL draft to stay in school would get people searching for more information.
Google Trends shows how much this one news story has swamped the search engine.