Premium customizable HG Servers
Our customizable servers
Our premium customizable (HG) server series is designed to support resource-intensive production environments, and offers high service availability. These servers are perfect for workloads that involve big data, machine learning, and critical databases.
Explore new configurations in our 2019 series!
The advantages of a Premium HG Server
High availability for critical usage
HG Dedicated Servers come with our highest availability guarantee, built on an architecture with redundant electrical circuits, water-cooling, network and power supplies. With this series, you can also replace disks without any service interruption.
Hot-swapping: Changing disks while your server is running
All of the hard disks and SSDs (SAS, SATA) can be replaced without rebooting your server. As a result, if you need to increase your server's disk capacity, for example, you will not experience any service interruption.
Our HG servers are highly customizable and multi-purpose, so they can be adapted to fit a wide range of needs and workflows. You can choose one or two processors, the amount of RAM, the RAID type (software or hardware RAID), and the storage media best suited to your projects. You can also equip your servers with graphics processing units (GPUs). This way, your customizable server can meet all of your needs.
The vRack private network
To ensure that data can be transferred securely between servers within an infrastructure, it is important to rely on a private, physical and secure network.
With our vRack private network, you can establish direct, physical connections between all of your servers. You can balance loads, isolate your databases, and connect your nodes and servers using our 3 Gbit/s (soon to be 10 Gbit/s) private network.
High quality for high performance
With 19 years of experience designing servers, OVH can guarantee you the very highest performance. We carefully select the best hardware components on the market, then test them rigorously after they are assembled, so that we can offer you the very best products. Focus on your business by relying on our premium servers.
Uses for an HG Server
Optimize your workloads and data analysis by creating clusters based on servers with balanced resources and a high storage capacity.
Build a server infrastructure that manages data variety and velocity, and apply analytical processing to the data in order to harness its full value.
Make the very most of Apache Hadoop and Apache Spark with Cloudera, Hortonworks and MapR.
AI and machine learning
As artificial intelligence rises in popularity, companies need increasingly powerful machines for high-performance computing (HPC).
High computing capacity is essential for handling the computing loads required for artificial intelligence, such as neural network modelling, image recognition, structural analysis and fraud detection.
The computing power required to train neural networks is most often provided by graphics processors such as the NVIDIA P100.
Hadoop is an open-source framework written in Java, designed to help users build distributed, scalable applications. Hadoop manages the processing of high-volume data within distributed environments.
It works in clusters, and its distributed filesystem (HDFS) supports high-throughput data transfer between nodes while replicating data across them. As a result, the system does not experience any service interruption if one of the nodes suffers a technical failure.
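To make the MapReduce model concrete, here is a minimal, single-process Python sketch of the word-count example commonly used to introduce Hadoop. It only illustrates the map, shuffle and reduce phases that Hadoop distributes across cluster nodes; it is not Hadoop code, and the function names are our own.

```python
from collections import defaultdict

# Toy, single-process illustration of the MapReduce model.
# Hadoop runs each of these phases on many nodes in parallel.

def map_phase(lines):
    # Map: emit a (key, value) pair for every word seen.
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle_phase(pairs):
    # Shuffle: group all emitted values by their key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate each group into a single result.
    return {key: sum(values) for key, values in groups.items()}

def word_count(lines):
    return reduce_phase(shuffle_phase(map_phase(lines)))
```

For example, `word_count(["big data", "big clusters"])` returns `{"big": 2, "data": 1, "clusters": 1}`.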
Spark is an open-source framework for distributed computing. It offers a software framework for processing big data, suited to very large-scale, complex analysis.
Its main advantage is speed: it can run programs up to 100 times faster than Hadoop MapReduce when processing in memory, and up to 10 times faster when processing on disk.
It is also easier to use, and suits a wider range of purposes than other frameworks.
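As a rough illustration of the programming model behind that speed, here is a toy, pure-Python sketch of the chained, lazy transformation style that Spark's RDD API exposes. The `ToyRDD` class is hypothetical and only mimics the PySpark method names: transformations are merely recorded, and nothing executes until an action is called, which is the property that lets Spark keep intermediate data in memory instead of writing it to disk between steps.

```python
from functools import reduce as _reduce

class ToyRDD:
    """Toy sketch of Spark-style lazy transformations. Not Spark itself."""

    def __init__(self, data, ops=()):
        self._data = data
        self._ops = ops  # recorded transformations, not yet executed

    def map(self, fn):
        # Transformation: record the step, return a new (lazy) dataset.
        return ToyRDD(self._data, self._ops + (("map", fn),))

    def filter(self, fn):
        return ToyRDD(self._data, self._ops + (("filter", fn),))

    def collect(self):
        # Action: execute the whole recorded pipeline in one pass.
        items = iter(self._data)
        for kind, fn in self._ops:
            items = map(fn, items) if kind == "map" else filter(fn, items)
        return list(items)

    def reduce(self, fn):
        # Action: aggregate the pipeline's results.
        return _reduce(fn, self.collect())
```

For example, `ToyRDD(range(10)).map(lambda x: x * x).filter(lambda x: x % 2 == 0).collect()` returns `[0, 4, 16, 36, 64]`.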
Elasticsearch is a powerful search and analytics engine, built on the Apache Lucene library. It is free software, written in Java.
It provides a distributed, multitenant-capable full-text search engine with a RESTful API, storing documents as JSON.
Thanks to its scalability, with a clustering and load-balancing system, it can handle an ever-growing number of tasks. It can also rebuild data that was lost because of a defective node, for example.
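As an illustration of that RESTful, JSON-based interface, a full-text search is expressed as a JSON request body sent to a search endpoint (for example, `POST /articles/_search`, where the index name `articles` and the field `body` are hypothetical):

```json
{
  "query": {
    "match": {
      "body": "distributed search"
    }
  }
}
```

Elasticsearch returns the matching JSON documents ranked by relevance, so the same HTTP interface serves both indexing and querying.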