Elephants on a Trapeze: Keeping Big Data Agile

On April 1st, the Department of Computer Science at Rutgers University, where I am a professor, held an open house. I gave a talk called “Elephants on a Trapeze: Keeping Big Data Agile”.

The talk is an introduction to performance issues related to big data without getting too technical. You’ll have to decide if I succeeded with the “not too technical” part. My take is on how to keep big data indexed — not surprising since the work in this talk is the basis for TokuDB®, Tokutek’s MySQL storage engine for keeping large data indexed. A video of my talk can be found here.

[Embedded slides: "Elephants on a Trapeze: Keeping Big Data Agile" from Tokutek]
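
For a concrete feel for what "keeping large data indexed" means at the storage-engine level, here is a minimal sketch, written by me rather than taken from the talk: it creates a table on the TokuDB engine with a secondary index that the engine maintains as rows are inserted. It assumes a MySQL server with the TokuDB plugin installed; the connection details, database, and table are hypothetical.

    # Minimal sketch (hypothetical schema): an indexed table on the TokuDB
    # storage engine. Assumes MySQL with the TokuDB plugin installed.
    import mysql.connector

    conn = mysql.connector.connect(host="localhost", user="demo",
                                   password="demo", database="bigdata")
    cur = conn.cursor()

    # ENGINE=TokuDB selects Tokutek's storage engine; the secondary index
    # on `ts` is kept up to date by the engine as rows arrive.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS events (
            id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            ts DATETIME NOT NULL,
            payload VARCHAR(255),
            INDEX idx_ts (ts)
        ) ENGINE=TokuDB
    """)

    cur.executemany(
        "INSERT INTO events (ts, payload) VALUES (%s, %s)",
        [("2011-04-01 12:00:00", "open house"),
         ("2011-04-01 12:05:00", "talk begins")],
    )
    conn.commit()
    cur.close()
    conn.close()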

Big Data is how big exactly?

I see that “Big Data” has become the new buzzword with a spike of hype around it. Everyone’s jumping on it. Companies are eager to promote their products as “Big Data,” just as they were eager to be associated with Web 2.0, Service-Oriented Architectures, and all the rest. Predictably, there’s basically zero agreement on what it means.

I’ve seen “Big Data” mentioned in the context of 1TB, which I think is a rather moderate size. But worse yet, I’ve seen 100GB labeled Big Data. I’ve even seen 5GB labeled Big Data. No links — I don’t want to draw attention to them.

I don’t know what Big Data is, but the stick-of-gum-sized flash drive in my pocket holds 16GB. It’s pretty Small. I mean, I forget it’s even there — it’s definitely not Big. I don’t know where I’d draw the line, but if it fits in a commodity server’s memory, which 100GB can do easily these days, it’s not Big Data. I don’t even …

Outliers and coexistence are the new normal for big data

Letting data speak for itself through analysis of entire data sets is eclipsing modeling from subsets. In the past, all too often what were once disregarded as "outliers" on the far edges of a data model turned out to be the telltale signs of a micro-trend that became a major event. To enable this kind of advanced analytics and integrate it in real time with operational processes, companies and public sector organizations are evolving their enterprise architectures to incorporate new tools and approaches.
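
As a toy illustration of that point, and my own sketch rather than anything from the post: scan the entire data set and keep the far edges visible, instead of discarding them while fitting a model to a subset. The data and the z-score threshold below are arbitrary.

    # Minimal sketch: flag points far from the mean across the whole data
    # set, so potential micro-trends are surfaced rather than discarded.
    import statistics

    def flag_outliers(values, z_threshold=2.0):
        # Threshold is arbitrary; tune it for the data at hand.
        mean = statistics.mean(values)
        stdev = statistics.pstdev(values)
        if stdev == 0:
            return []
        return [v for v in values if abs(v - mean) / stdev > z_threshold]

    readings = [10.1, 9.8, 10.3, 10.0, 9.9, 42.7]
    print(flag_outliers(readings))  # [42.7] - noise, or the start of a trend?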

Whether you prefer "big," "very large," "extremely large," "extreme," "total," or another adjective for the "X" in the "X Data" umbrella term, what's important is accelerated growth in three dimensions: volume, complexity and speed.

Big data is not without its limitations. Many organizations need to revisit business processes, solve data silo challenges, and invest in visualization and collaboration tools to make big data understandable and …

Who/What to acquire next

Well, as predicted, with Aster Data recently picked up by Teradata, most of the key new-generation MPP distributed analytics vendors have now been acquired (Aster Data, Vertica, Netezza & Greenplum).  This had to happen, and it was expected to happen.  The MPP analytics startup “revolution” is over, and these technologies will now be integrated into the mainstream.

So what’s next?  As we know, if you are a massive multi-national software company, it is a lot less risky to innovate incrementally and leave the development of “game changing” technologies to startups that can be acquired after they prove both the tech and the market.  So what follows MPP? …

What’s hot in Big Data startups?

There are so, so many big data platforms in play at the moment that it can be confusing for developers to know where to start.  For startups it used to be simple: MySQL.  But dust clouds were kicked up when all the NoSQL platforms started to crash the party 18 months or so ago.  Now I see the dust beginning to settle, and we are starting to see some market “leaders” appear.  A very unscientific approach is to list the technologies I hear about in the “big data startup” world on a daily basis.  These are, in no particular order:

  • MySQL - yes, it is still very much hanging in there despite the Oracle acquisition.  MySQL has been helped by technologies such as AWS RDS and Xeround, which make it more digestible for big data startups that want to minimize operational overheads.
Q&A with Stephen Baker of "Final Jeopardy"

IBM's Watson natural language Question & Answer system made headlines recently with its primetime debut on Jeopardy.  Despite a few embarrassing answers, Watson trounced top Jeopardy players Brad Rutter and Ken Jennings.  Watson is built from 90 IBM Power 750 servers running Linux, with 16 terabytes of memory and 80 teraflops of processing power.  Watson is perhaps the most famous "Big Data" system out there.  Watson's knowledge base consists of 200 million pages of text data that is pre-processed using …

Some NoSQL Myths

I have been busy travelling but thought I would jot down a few NoSQL myths that are fresh in my head from recent discussions.

  • Twitter uses Cassandra internally but has not migrated its tweet store, despite earlier plans to.  For now, tweets are still stored in MySQL.
  • Despite the widely accepted view that the use of Cassandra led to Digg's issues, a couple of Digg engineers have apparently discounted this.
  • Despite the widely accepted view that NoSQL databases all use eventual consistency, this is not so.  HBase, for example, offers full consistency (see the sketch after this list).
  • Despite the widely accepted view that NoSQL is only about unlimited distributed scalability, this is …
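
On the consistency myth, here is a minimal sketch, mine rather than from the post, of what HBase's consistency means in practice: a read issued immediately after a write returns the written value. It uses the happybase client; the host is an assumption, and the `demo` table with a `cf` column family is assumed to already exist.

    # Minimal sketch: read-after-write against HBase via happybase.
    import happybase

    conn = happybase.Connection("hbase-host")  # hypothetical host
    table = conn.table("demo")                 # assumed to exist

    table.put(b"row-1", {b"cf:value": b"hello"})
    row = table.row(b"row-1")

    # HBase serves each row from a single region server, so this read sees
    # the write just made; an eventually consistent store might not.
    assert row[b"cf:value"] == b"hello"
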
How Real is the Data Deluge?

It seems obvious that given the decreasing cost of storage and computation, there's going to be a significant increase in the volume of data that organizations accumulate over the next 10 years.  But the type of data being accumulated may be different from the areas where traditional DBMSs dominated.  It's not just about transactions; it's search patterns, on-line behavior, click-thru data, events fired off by smartphones, messages over Twitter & Facebook, log data of various kinds.

If an organization can figure out a better way to identify prospects, or deliver more targeted ads, or optimize pricing decisions by analyzing terabytes of data, they'd be crazy not to. Over the long term, companies that don't develop these capabilities will be at a competitive disadvantage.

As to what the implications are from a …

The problem with a full box of big data tools

“NoSQL”, for lack of a better name, is a generic term that describes any data management system that does not use SQL as a query interface.  Generally this means any data management system that is non-relational, but the term has also been stretched to the boundaries of what constitutes a data management system at all (to include, for example, Hadoop).

Early on (a couple of years back, in NoSQL time), when the term was coined, I think the positioning was much more aggressive, but more recently this has been softened, so NoSQL is now commonly glossed as “Not only SQL” or “next generation databases” (whatever that means).  The common message you get now is something along these lines: NoSQL systems are more “specialized”, each being designed to solve a smaller number of problems than the …

The SMAQ stack for big data

SMAQ report sections: MapReduce, Storage, Query, Conclusion

"Big data" is data that becomes large enough that it cannot be processed using conventional methods. Creators of web search engines were among the first to confront this problem. Today, social networks, mobile phones, sensors and science contribute to petabytes of data created daily.

To meet the challenge of processing such large data sets, Google created MapReduce. Google's work and Yahoo's creation of the Hadoop MapReduce implementation have spawned an ecosystem of big data processing tools.
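
As a refresher on the pattern itself, here is a minimal single-process sketch of MapReduce (word count), showing the map, shuffle, and reduce phases that Hadoop distributes across a cluster. It illustrates the idea only; it is not Hadoop's API.

    # Minimal sketch: word count via map -> shuffle -> reduce, in one process.
    from collections import defaultdict

    def map_phase(document):
        # Emit (word, 1) for every word, as a mapper would.
        for word in document.split():
            yield word.lower(), 1

    def shuffle(pairs):
        # Group values by key; Hadoop's framework does this between phases.
        grouped = defaultdict(list)
        for key, value in pairs:
            grouped[key].append(value)
        return grouped

    def reduce_phase(key, values):
        # Sum the counts for one key, as a reducer would.
        return key, sum(values)

    documents = ["big data is big", "data about data"]
    pairs = [p for doc in documents for p in map_phase(doc)]
    counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
    print(counts)  # {'big': 2, 'data': 3, 'is': 1, 'about': 1}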

As MapReduce has grown in popularity, a stack for big data systems …
