Will the Beijing Genomics Institute establish Chinese dominance in the genomics market?


I am well past due to post on the New Yorker’s article about B.G.I. The article is by subscription only, so here I cite the most interesting parts.

B.G.I., formerly called Beijing Genomics Institute, is the world’s largest genetic-research center. With a hundred and seventy-eight machines to sequence the precise order of the billions of chemicals within a molecule of DNA, B.G.I. produces at least a quarter of the world’s genomic data—more than Harvard University, the National Institutes of Health, or any other scientific institution.

The company has already processed the genomes of fifty-seven thousand (57,000) people. B.G.I. has also sequenced many strains of rice, the cucumber, the chickpea, the giant panda, the Arabian camel, the yak, a chicken, and forty types of silkworm.

The company was founded on 9/9/1999 at 9:19 a.m. in Beijing, China. It now has 4,000 employees with an average age of 26, is located in Shenzhen near the infamous Foxconn factory, and operates on a $1.58-billion loan from the China Development Bank, which funds multiple nonprofit and commercial projects, such as sequencing the DNA of 10,000 people from families with autism in the US and of a thousand obese and healthy people in Denmark. BGI’s plans include the Million Human Genome Project, the Million Plant and Animal Genomes Project, the Million Microecosystem Genomes Project, and the controversial Cognitive Genomics project, as well as millet (a very drought-tolerant crop) and cassava projects, both holding great promise for feeding China and Africa.

BGI is the biggest customer of Illumina, which has sold BGI 130 sequencers for half a million dollars each (my guess is that these are the HiSeq 2000 and HiSeq 2500; the latest and most powerful model, the HiSeq X Ten, released in 2014, costs about a million). When BGI bought Illumina’s main competitor, Complete Genomics, in 2013, Jay Flatley, Illumina’s CEO, said: “It is one thing to sell Coke and another to sell the formula for Coke. … when they bought Complete Genomics … they were allowed to … buy the formula.”

The article concludes by discussing the Cognitive Genomics project, whose goals are to select high-IQ embryos, to find a cure for Alzheimer’s, and to map the brain: “At some point … people will look back and wonder what all the fuss was about” [Chris Chang, a visiting scholar at BGI].


TechNavio’s market analysis: 14 trends to revolutionize data centers


TechNavio (a London-based company covering the global market for more than 500 technologies across 80 countries) has analysed the data center industry and reported the top 14 trends expected to have an enormous impact on it in the coming years.

The report is about more and more racks (“the data center rack market revenue crossed the billion dollar mark in 2013”), high-density servers, micro servers (“The unit shipment of micro servers tripled in 2012 and doubled in 2013. Additionally, market revenue is expected to grow more than 50 percent yearly until 2018. Moreover, they are expected to gain more traction in the market as the demand for server efficiency and low-power architecture increases along with growth of Web 2.0 companies.”), 40G/100G Ethernet and beyond, and “using less expensive, active, concerted, and adaptive methods to analyze and share data”, both structured and unstructured.
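As a rough sanity check on the micro-server numbers: growth of 50 percent yearly compounds quickly. A minimal sketch (the report gives no 2013 base revenue, so a normalized base of 1.0 is a hypothetical stand-in):

```python
# The quote above says micro-server revenue is "expected to grow more
# than 50 percent yearly until 2018". Compounding a normalized 2013
# base of 1.0 (hypothetical; the real base is not in the report) shows
# the implied multiplier over five years.

def project(base, rate, years):
    """Compound a starting value by a fixed annual growth rate."""
    value = base
    for _ in range(years):
        value *= 1 + rate
    return value

multiplier = project(1.0, 0.50, 5)  # 2013 -> 2018
print(round(multiplier, 2))  # 7.59: roughly 7.6x revenue in five years
```

So even at exactly 50 percent per year, the market would grow more than sevenfold by 2018.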

What are Gaming Big Data good for?


According to GamesBeat, companies now measure, and adjust to a player on the fly, many parameters of a game, such as onboarding techniques, the time to reach a specific level, the rate at which players can pick up goodies, etc.

“Gaming companies are now manipulating all of these variables as needed to ensure gamers onboard, get engaged, and keep playing over the long haul. Because, of course, it’s all about retention. If you can retain players, you can monetize. If you can’t, you won’t make money.”
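The retention logic in that quote can be made concrete. Here is a minimal sketch of a day-N retention metric; the session-log format and the exact day-N convention are my assumptions, not something from the GamesBeat piece:

```python
from datetime import date

# Hypothetical per-player session log: player id -> dates the player
# opened the game (illustrative data, not a real product's schema).
sessions = {
    "p1": [date(2014, 1, 1), date(2014, 1, 2), date(2014, 1, 8)],
    "p2": [date(2014, 1, 1)],
    "p3": [date(2014, 1, 1), date(2014, 1, 8)],
}

def day_n_retention(sessions, cohort_day, n):
    """Fraction of players who joined on cohort_day and came back
    exactly n days later (one common convention among several)."""
    cohort = [p for p, days in sessions.items() if min(days) == cohort_day]
    target = cohort_day.toordinal() + n
    returned = [p for p in cohort
                if target in (d.toordinal() for d in sessions[p])]
    return len(returned) / len(cohort) if cohort else 0.0

print(day_n_retention(sessions, date(2014, 1, 1), 7))  # 2 of 3 players returned
```

A dashboard tracking this number per cohort is exactly what lets a company see whether tweaking onboarding or reward rates moves long-haul retention.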

But in the long run, the collection of gaming data is not only about retention. To a great extent, it is about monitoring people’s gaming abilities and brain activity to design games for treating psychiatric and mood disorders, alleviating pain, learning new skills, and manipulating robots, sensors, and nanobots (for example, inside the body). Gaming data are needed for training robots and manipulating them. And gaming data will eventually be used to search for individuals with unusual brains and motor functions: to learn more about human neurology, brain physiology, and anatomy, to design new medical treatments, and to advance the human brain. What else can you imagine?

How Big Data will feed the world


If you ever worry (I do) about how to feed the growing population of the Earth, here is a solution: become a data scientist. Using Big Data to predict the weather, increase crop production, and reduce pollution will be an indispensable tool in fighting world hunger.

Big Data in 2014: Predictions by Guardian and Aerospike

I liked this post from the Guardian blog about four predictions for Big Data in 2014. It plays it pretty safe, and I expect these predictions to be just right. In addition, I think we will see the Big Data topic diverge by field: when people learn and tire of technological terms like Hadoop and NoSQL, they will start talking about specialized applications and approaches. Then it will become really fascinating.

Aerospike, a company developing an in-memory NoSQL database, offers seven predictions for Big Data in 2014 that lean more to the technological side. It is no surprise that one of the predictions is that “database architectures with operational in-memory NoSQL databases in front of analytic data warehouse will become standard.” I could not help noticing that this is exactly the solution I proposed in my currently pending NSF application for further development of our IntegromeDB search engine and integrated database.
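The architecture in that prediction, a fast operational store answering reads in front of a slower analytic warehouse, can be sketched as a read-through cache. Both stores below are stand-ins I made up (a dict and a function), illustrating the pattern rather than any real Aerospike or warehouse API:

```python
# Read-through pattern: operational reads are served from an in-memory
# front end; on a miss, the slow analytic back end is queried once and
# the result is cached for subsequent reads.

class CachedStore:
    def __init__(self, warehouse_lookup):
        self._cache = {}                    # in-memory key-value front end
        self._warehouse = warehouse_lookup  # slow analytic back end

    def get(self, key):
        if key not in self._cache:
            self._cache[key] = self._warehouse(key)  # miss: hit the warehouse
        return self._cache[key]

calls = []  # track how often the warehouse is actually queried

def warehouse_lookup(key):
    """Hypothetical warehouse query; imagine a heavy analytic scan here."""
    calls.append(key)
    return f"aggregate-for-{key}"

store = CachedStore(warehouse_lookup)
print(store.get("user42"))  # first read goes to the warehouse
print(store.get("user42"))  # second read is served from memory
print(len(calls))           # the warehouse was queried only once
```

The point of the prediction is that this split becomes the default: operational traffic never touches the warehouse directly.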

Aerospike also predicts “a greater use of Amazon’s platform-as-a-service (PaaS) for analytics, with real-time delivery of applications from cloud systems that offer higher performance and reliability.” Indeed, the estimate is that “between now and 2017 the market for PaaS solutions will increase by 30 percent annually, to reach a turnover of $14 billion by 2017.” And another sound prediction is that “at least one major national retailer will begin offering a free API access for their data to drive development of mobile apps for shopping recommendations and delivery services.” Databases with no API will eventually become useless.

Data Centers (Cloud) market to grow to $200 Billion by 2020

The estimate of $200 Billion for the global cloud market is provided by IBM, although it might be an underestimate: in 2012, the European Cloud Computing Strategy outlined actions to deliver a net gain of 2.5 million new European jobs and an annual boost of €160 Billion by 2020, which is more than US$200 Billion (at today’s exchange rate) for Europe alone, not counting the US and Asian markets.
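The comparison rests on simple arithmetic. Assuming a EUR/USD rate of 1.35 (roughly the early-2014 rate; my assumption, not a figure from either source):

```python
# Convert the projected EU cloud boost to dollars and compare it with
# IBM's $200 billion global estimate. The exchange rate is an assumed
# value of 1.35, not taken from either report.

eur_boost = 160e9            # EUR 160 billion annual boost by 2020
eur_usd = 1.35               # assumed EUR/USD exchange rate
usd_equivalent = eur_boost * eur_usd
print(usd_equivalent / 1e9)  # 216.0, i.e. more than $200 billion
```

So the European projection alone already exceeds IBM’s global figure at that rate.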

In 2014, enterprises will average $8M in investments in big data

Forbes reports on IDG’s big data enterprise survey and predictions for 2014. The major takeaway is that “storage (49%), servers (47%), cloud infrastructure (4%), discovery and analytics (43%) and applications (42%) are the top five areas of big data investment today.” The hiring market looks strong: a quarter of the respondent companies plan on hiring data scientists, data architects, data analysts, and data visualizers.