Description
Location (Korea, USA, China, India, etc.)
Korea
Describe the bug
I have a question about the KVSSD user library design, not a bug report.
System environment
- Firmware version : ETA51KBV_20190809_ENC.bin
- Number of SSDs : 1
- OS & Kernel version : Ubuntu 16.04, kernel v4.13.15-041315
- GCC version : gcc/g++ v7.5.0
- kvbench version : N/A
- KV API version : Latest
- User driver version : Latest
- Driver : kernel driver
Workload
- key size : 16B
- value size : 4KB
- operation option : both sync and async
Additional context
I have a question about the KVSSD design, not a bug report. Why does the user library use only one queue pair in the current implementation? To exploit the scalability and maximum bandwidth of the device under the NVMe architecture, a multi-queue structure would be more appropriate. In the kernel driver, I confirmed that an AIO completion thread is created for each core, so why does the user library use only a single queue pair? Are there technical issues or limitations that prevent multi-queue support in the user library?
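
For reference, the per-core queue-pair structure I have in mind looks roughly like the sketch below. This is only a minimal illustration assuming a generic SPDK-style user-space NVMe driver; `worker_fn`, `g_ctrlr`, and `g_stop` are made-up names for illustration and are not part of the KVSSD user library API.

```c
/*
 * Minimal sketch of a per-thread NVMe queue-pair layout using plain SPDK
 * calls. Each worker owns a private submission/completion queue pair and
 * polls its own completions, so the I/O hot path needs no locking.
 * g_ctrlr is assumed to have been attached during probe (not shown).
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

static struct spdk_nvme_ctrlr *g_ctrlr;  /* attached elsewhere */
static volatile bool g_stop;

static void *worker_fn(void *arg)
{
    (void)arg;
    /* Private I/O queue pair for this thread (default options). */
    struct spdk_nvme_qpair *qp =
        spdk_nvme_ctrlr_alloc_io_qpair(g_ctrlr, NULL, 0);
    if (qp == NULL) {
        fprintf(stderr, "failed to allocate I/O qpair\n");
        return NULL;
    }
    while (!g_stop) {
        /* ... submit KV/NVMe commands on qp here ... */

        /* Reap completions for this thread's queue pair only. */
        spdk_nvme_qpair_process_completions(qp, 0);
    }
    spdk_nvme_ctrlr_free_io_qpair(qp);
    return NULL;
}
```

With one queue pair per thread, submissions and completions never cross cores, which is the property that lets multi-queue NVMe drivers scale with core count.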