Benchmarks
======================
Do we really need the benchmark? People always use benchmarks to compare systems, but benchmarks are misleading. The resources, e.g., CPU, disk, memory, and network, all matter a lot. And with Weed File System, single node vs. multiple nodes, and benchmarking on one machine vs. several machines, all matter a lot as well.
Here are the steps on how to run the benchmark if you really need some numbers.
Unscientific Single machine benchmarking
##################################################
For simplicity, I start the weed servers in one console. It is better to run the servers in different consoles.
For more realistic tests, please start them on different machines.
.. code-block:: bash

   # prepare directories
   mkdir 3 4 5

   # start a master (with one volume server) and 2 more volume servers
   ./weed server -dir=./3 -master.port=9333 -volume.port=8083 &
   ./weed volume -dir=./4 -port=8084 &
   ./weed volume -dir=./5 -port=8085 &

   # run the benchmark against the master
   ./weed benchmark -server=localhost:9333
What does the test do?
#############################
By default, the benchmark command writes 1 million files, each 1KB in size, uncompressed. For each file, one request is sent to assign a file key, and a second request is sent to post the file to the volume server. The written file keys are stored in a temp file.
Then the benchmark command reads the list of file keys and randomly reads the 1 million files. For each volume, the volume id is cached, so there are only a few requests to look up volume locations, and all the remaining requests fetch file content.
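This per-file write path can be sketched with two curl calls. The parsing below runs against a hard-coded sample assign response (the JSON shape the master's ``/dir/assign`` endpoint returns), so it works offline; only the commented curl lines assume a live cluster on the ports used above, and ``somefile.txt`` is a placeholder name.

.. code-block:: shell

   # Step 1: ask the master to assign a file key (live command, shown commented):
   #   curl "http://localhost:9333/dir/assign"
   # A sample response of the documented shape:
   ASSIGN='{"fid":"3,01637037d6","url":"127.0.0.1:8083","publicUrl":"127.0.0.1:8083","count":1}'

   # Extract the file id and the volume server address from the response
   FID=$(echo "$ASSIGN" | sed 's/.*"fid":"\([^"]*\)".*/\1/')
   URL=$(echo "$ASSIGN" | sed 's/.*"url":"\([^"]*\)".*/\1/')
   TARGET="http://$URL/$FID"

   # Step 2: post the file content to that volume server (live command, commented):
   #   curl -F "file=@somefile.txt" "$TARGET"
   echo "would POST the file to $TARGET"

The benchmark repeats this pair of requests once per written file.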
Many options are configurable. Please check the help content:
.. code-block:: bash

   ./weed benchmark -h
Common Problems
###############################
The most common problem is the "too many open files" error, because the test itself starts too many network connections on one single machine. On my local MacBook, the error always happened if I ran the "random read" test right after the write test. I had to run "weed benchmark -write=false" to run the reading test alone. Lowering the concurrency level, e.g., "-c=16", also helps.
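In concrete terms, the workarounds above look like this (a sketch only; it assumes the cluster from the setup section is still running, and 10240 is an arbitrary example limit):

.. code-block:: shell

   # Raise the per-process open-file limit before benchmarking
   ulimit -n 10240

   # Re-run only the read phase, reusing the file keys from a previous write run
   ./weed benchmark -server=localhost:9333 -write=false

   # Or lower the concurrency level from the default of 64
   ./weed benchmark -server=localhost:9333 -c=16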
My own unscientific single machine results
###################################################
My own results on a MacBook with a solid state disk, CPU: 1 Intel Core i7 at 2.2GHz.
.. code-block:: bash

   Write 1 million 1KB files:

   Concurrency Level:      64
   Time taken for tests:   182.456 seconds
   Complete requests:      1048576
   Failed requests:        0
   Total transferred:      1073741824 bytes
   Requests per second:    5747.01 [#/sec]
   Transfer rate:          5747.01 [Kbytes/sec]

   Connection Times (ms)
                 min      avg      max      std
   Total:        0.3      10.9     430.9    5.7

   Percentage of the requests served within a certain time (ms)
      50%     10.2 ms
      66%     12.0 ms
      75%     12.6 ms
      80%     12.9 ms
      90%     14.0 ms
      95%     14.9 ms
      98%     16.2 ms
      99%     17.3 ms
     100%    430.9 ms

   Randomly read 1 million files:

   Concurrency Level:      64
   Time taken for tests:   80.732 seconds
   Complete requests:      1048576
   Failed requests:        0
   Total transferred:      1073741824 bytes
   Requests per second:    12988.37 [#/sec]
   Transfer rate:          12988.37 [Kbytes/sec]

   Connection Times (ms)
                 min      avg      max      std
   Total:        0.0      4.7      254.3    6.3

   Percentage of the requests served within a certain time (ms)
      50%      2.6 ms
      66%      2.9 ms
      75%      3.7 ms
      80%      4.7 ms
      90%     10.3 ms
      95%     16.6 ms
      98%     26.3 ms
      99%     34.8 ms
     100%    254.3 ms
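Because each file is exactly 1KB, the two summary rates coincide, and the raw counts are enough to recompute them. A quick sanity check on the write run above (numbers copied from that output):

.. code-block:: shell

   # Recompute the write-run summary figures from the raw counts
   REQUESTS=1048576   # Complete requests
   BYTES=1073741824   # Total transferred
   ELAPSED=182.456    # Time taken for tests, in seconds

   RPS=$(awk -v r="$REQUESTS" -v t="$ELAPSED" 'BEGIN { printf "%.2f", r/t }')
   RATE=$(awk -v b="$BYTES" -v t="$ELAPSED" 'BEGIN { printf "%.2f", b/1024/t }')

   echo "Requests per second: $RPS [#/sec]"
   echo "Transfer rate:       $RATE [Kbytes/sec]"

Both come out to 5747.01, matching the summary lines: with 1KB files, Kbytes/sec and requests/sec are the same number by construction.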
My own replication 001 single machine results
##############################################
Create the benchmark volumes directly:
.. code-block:: bash

   curl "http://localhost:9333/vol/grow?collection=benchmark&count=3&replication=001&pretty=y"

   # Later, after finishing the test, remove the benchmark collection
   curl "http://localhost:9333/col/delete?collection=benchmark&pretty=y"
.. code-block:: bash

   Write 1 million 1KB files:

   Concurrency Level:      64
   Time taken for tests:   174.949 seconds
   Complete requests:      1048576
   Failed requests:        0
   Total transferred:      1073741824 bytes
   Requests per second:    5993.62 [#/sec]
   Transfer rate:          5993.62 [Kbytes/sec]

   Connection Times (ms)
                 min      avg      max      std
   Total:        0.3      10.4     296.6    4.4

   Percentage of the requests served within a certain time (ms)
      50%      9.7 ms
      66%     11.5 ms
      75%     12.1 ms
      80%     12.4 ms
      90%     13.4 ms
      95%     14.3 ms
      98%     15.5 ms
      99%     16.7 ms
     100%    296.6 ms

   Randomly read 1 million files:

   Concurrency Level:      64
   Time taken for tests:   53.987 seconds
   Complete requests:      1048576
   Failed requests:        0
   Total transferred:      1073741824 bytes
   Requests per second:    19422.81 [#/sec]
   Transfer rate:          19422.81 [Kbytes/sec]

   Connection Times (ms)
                 min      avg      max      std
   Total:        0.0      3.0      256.9    3.8

   Percentage of the requests served within a certain time (ms)
      50%      2.7 ms
      66%      2.9 ms
      75%      3.2 ms
      80%      3.5 ms
      90%      4.4 ms
      95%      5.6 ms
      98%      7.4 ms
      99%      9.4 ms
     100%    256.9 ms
How can replication 001 write faster than no replication?
I could not tell. Very likely the computer was in turbo mode; I cannot reproduce it consistently either. I posted the numbers here just to illustrate that numbers lie. Don't quote the exact numbers; just getting an idea of the performance is good enough.