"Chairman, the second batch of 300 servers has passed all verification. After 30 days of actual operation, the general failure rate is 17%, the severe failure rate is 4%, and the stability rate is 97.79%!" reported the person in charge at the Jiangnan Network Center in Cao County to Huang He.
"Good, you've all worked hard!" Huang He nodded, very satisfied with the results.
A 17% failure rate sounded rather high, and the additional 4% severe failure rate meant that 4 out of every 100 servers suffered a serious failure, one that would take anywhere from ten days to half a month to repair, if the unit was not scrapped outright.
If servers purchased from outside had such a high failure rate, Huang He would have long ago demanded a refund.
However, if these servers were all produced by Jiangnan Group itself, then it was a different matter.
Indeed, this batch of 300 servers was entirely Jiangnan Group's own product, the second batch it had produced. The first batch had already been sent to the Jiangnan Network Center in Wenzhou. Together, the 600 servers in the north and south supported Jiangnan Group's network services.
For the earliest batches of self-produced equipment, built through trial and error, a high failure rate was understandable. Only in novels could a product achieve astonishing quality the moment it went into production.
Any new product, upon initial research and development, would be accompanied by various failures. Only by continuously finding faults during use, accumulating experience, improving production processes, and avoiding areas prone to problems could the quality of the servers be gradually improved.
In fact, the failure rate of the first batch was even higher, exceeding 25%. Therefore, Huang He was already very pleased with this second batch's failure rate, and now it was just a matter of striving for perfection.
At the same time, one point had to be noted: although the per-unit failure rate was high, the stability rate was also high.
A stability rate of 97.79% was almost comparable to that of international first-tier brand servers, such as the HP servers that Jiangnan Group had previously purchased.
So the question arises: how could servers of such poor quality achieve such an astonishing stability rate?
This was because single-server failure had been taken into account from the very start of these servers' design. They all shared a complete interconnection system: the 300 servers formed a single whole, and if any one server failed, its tasks were quickly transferred to the others.
At the same time, because these 300 servers were only carrying the workload of about 200 servers, the cluster could keep operating without any impact even if dozens of them dropped offline at once.
The remaining 2% or so of instability this time came from one large-scale regional power outage, two local network failures, and one incident in which a storage server failed, causing data loss and network fluctuations.
In other words, none of the downtime had been caused by the computing servers themselves; they had effectively maintained 100% stable operation.
Of course, one point must be admitted: compared to international first-class server manufacturers, or even second-tier server manufacturers, Jiangnan Group's servers still had a significant gap.
First, for a data volume like Jiangnan Group's, purchasing 50 HP servers would normally have been sufficient, yet Huang He had deployed 600 servers across the northern and southern network centers: 400 of them actually working, with 200 added for redundancy and stability.
In other words, one HP server was equivalent to eight Jiangnan servers in efficiency.
However, this was unavoidable. The chips used in the Jiangnan servers were mid-range chips of Jiangnan's own design, with computing cores two tiers below those of foreign servers. Combined with problems in other electronic components and the immaturity of the server design, this was already the best Jiangnan Group could build, and catching up with international high-end standards was still a long way off.
Of course, Jiangnan Group's servers also had their advantages, namely their significantly lower price.
For example, an HP server at this time cost $1.2 million to purchase, and this was the cheapest among all high-end servers.
But what about the price of Jiangnan servers?
As they were not yet publicly sold, there was no official pricing. However, Huang He estimated the selling price to be 500,000 RMB per unit.
Indeed, for only 500,000 RMB one could purchase an enterprise-grade server. Converted to US dollars, that was just over $60,000, about 1/20th the price of an HP server.
The price of one HP server could buy 20 Jiangnan servers, and the computing power of 20 Jiangnan servers was equivalent to that of 2.5 HP servers. This cost calculation was clear to everyone.
But this was the price Huang He set for external sales. What about the actual production cost?
Excluding equipment purchase costs, the production cost of one Jiangnan server was only around 200,000 RMB. Of that, 100,000 RMB went toward parts Jiangnan Group could not produce itself, and the remaining 100,000 RMB went almost entirely to chip procurement and labor.
However, the chip cost had become a closed loop: Nikon's ten lithography machines were all in place, and every chip used in the Jiangnan servers could now be produced in-house.
For instance, 200 servers in this second experimental batch already used chips produced by Jiangnan Group. From here on, Jiangnan Group would stop purchasing chips from external suppliers entirely, and the server clusters would run solely on chips it had developed and produced itself.
By then, the production cost of the Jiangnan servers would drop further, and each server sold would bring in over 300,000 RMB in profit. It was an astonishingly lucrative business.
However, considering the $1.2 million price of HP servers, one could imagine how much profit they were making.
And if Brother Ma could visit this base, most of his doubts would be answered, because Jiangnan Group had deployed a massive number of servers at a very low cost to keep Facebook running.
If Tengda wanted to maintain a server cluster of the same scale and computing power, it would likely cost around $200 million, and Tengda did not have that much money.
So this was a fundamental difference in strength; Jiangnan and Tengda were no longer companies of the same dimension. Tengda was still just a simple internet company, while Jiangnan had evolved into a network operator + network hardware manufacturer. It could produce servers itself, a problem Tengda had no way to solve.
Furthermore, it was worth mentioning that although Jiangnan Group's computing servers were considered third-rate by international standards, Jiangnan Group's storage servers had already reached international top-tier levels.
This was mainly due to Tong He and his graphene.
The "black crystal" that had caused a stir in the R&D center was merely the second graphene material developed by Tong He with high practical value. The first graphene material with high practical value was actually a storage particle.
Storage particles made from this special material, at the same size, could store 20 times more data than the best storage particles on the international market at that time.
Yes, it was 20 times the data. Therefore, even if the design of other aspects of the storage servers lagged far behind international advanced levels, Jiangnan Group's storage servers could achieve 5 times the performance of the best international storage servers.
Whether in capacity, read/write speed, or physical size, the lead was more than fivefold, putting them roughly at the level of storage servers from around 2008 in the future.
Currently, there were about 50 storage servers integrated into the server cluster, capable of storing over a hundred million TB of data. Jiangnan Group would not need to expand its storage servers for at least two years.
At the same time, Jiangnan Group was also planning to make storage servers its first product launched onto the international market, since its computing servers were not yet presentable.
Returning to the topic: Brother Ma's other question concerned the network.
One reason was that Jiangnan had established two network centers, north and south, which shared the entire traffic load between them and improved network efficiency.
However, this was not the main reason. The main reason was the design optimization of Facebook itself.
In fact, Facebook had taken only a little over two months to go from development to its first version; the subsequent four months were spent entirely on refining it.
During actual testing, the Jiangnan team had discovered that Facebook consumed an enormous amount of bandwidth. So, after four months of optimization, almost all of the images used for space decoration were stored in an offline cache.
This way, apart from the first time a space was opened, every subsequent page load could pull the image data directly from the offline cache, with no need to fetch it over the network again.
Second, Facebook's carefully designed news circles published recommended space updates directly on each circle's page. Users could read an update directly by clicking it, skipping a large number of the steps needed to retrieve full space data; only a small amount of text or image data had to be fetched, naturally saving a great deal of bandwidth.
In addition, there were dozens of other carefully polished details designed to save traffic. All of these were the fruits of those four months of development.
As for Tengda, they had had only 15 days in total, so QQ Zone had essentially been copied from OO Zone, a sample product that was declared dead on release and had never undergone any refinement.
Although the code had been rewritten, the fundamental functional design was identical. To view QQ Zone updates, a user first had to enter QQ Zone, load a large amount of space data from the server, then load the update data, and only then would the space's content be displayed.
All of these steps consumed a great deal of bandwidth, so naturally it could not be fast.
In fact, Brother Ma did later discover these problems, but fixing them all would take time to refine, and time was exactly what he no longer had.