OREANDA-NEWS. February 27, 2012. Fujitsu today announced the development and immediate availability of Interstage Big Data Parallel Processing Server V1.0, a software package that substantially improves reliability and processing performance for the parallel distributed processing of big data. These gains come from combining the Apache Hadoop(1) open source software (OSS) with Fujitsu's proprietary distributed file system. The new software package has the added benefit of quick deployment.

By combining Apache Hadoop with Fujitsu's proprietary distributed file system, which has a strong track record in mission-critical enterprise systems, the new solution improves data integrity while eliminating the need to transfer data to Hadoop processing servers, thereby enabling substantial gains in processing performance. Moreover, the new server software includes a Smart Set-up feature based on Fujitsu's smart software technology(2), making system deployments quick and easy.

Fujitsu will support companies in their efforts to leverage big data by offering deployment-assistance services, including for Apache Hadoop, along with other support services.

In addition to being large in volume, data collected from various sensors and smartphones, tablets and other smart devices comes in a wide range of formats and structures, and it also accumulates rapidly. Apache Hadoop, an OSS that performs distributed processing of large volumes of unstructured data, is considered to be the industry standard for big data processing.
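As background, the MapReduce model that Hadoop implements can be illustrated with a minimal, self-contained sketch. This is plain Python for illustration only, not Fujitsu's product or the actual Hadoop API: a map phase emits key-value pairs from unstructured text records, a shuffle step groups the pairs by key, and a reduce phase aggregates each group. In a real Hadoop cluster these phases run in parallel across many nodes.

```python
from collections import defaultdict

def map_phase(records):
    # Emit (word, 1) pairs from each unstructured text record.
    for record in records:
        for word in record.split():
            yield word.lower(), 1

def shuffle(pairs):
    # Group all emitted values by key, as Hadoop does
    # between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Aggregate each key's values; here, a simple word count.
    return {key: sum(values) for key, values in groups.items()}

# Hypothetical sample records standing in for sensor/device data.
records = ["Sensor data arrives fast", "sensor data varies in format"]
counts = reduce_phase(shuffle(map_phase(records)))
print(counts["sensor"])  # 2
print(counts["data"])    # 2
```

The same three-phase structure applies regardless of the aggregation performed, which is what lets Hadoop scale one programming model across large volumes of unstructured data.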

The new software package, based on the latest Apache Hadoop 1.0.0, brings together Fujitsu's proprietary technologies to enable enhanced reliability and processing performance while also shortening deployment times. This helps support the use of big data in enterprise systems.