Basic Usage

Sample configuration files and geometry input files are provided with the binary so that you can verify that the system is set up properly and that simulations start correctly.

Single and Double Precision Binaries

Since v1.4, nanoFluidX has been delivered in both single (SP) and double (DP) precision. For some very specific applications or simulations, single precision is insufficient and double precision should be used. It is difficult to universally predict which cases require double precision; therefore, the single precision binary prints a warning message with some guidance about possible precision issues. It is up to you to acknowledge the warning message and decide whether double precision is necessary.

As an example of such a situation, consider a case with large discrepancies in scale for velocities or positions, for example, a maximum velocity of 30 m/s combined with the need to accurately resolve motion at a minimum velocity of 0.0003 m/s.

Single precision provides satisfactory accuracy for most realistic cases.

Performance of the single versus double precision binaries varies greatly depending on the type of GPU, but overall it is realistic to expect run times at least two times slower when using double precision. The latest generation of NVIDIA Tesla GPUs (the V100) significantly improves both single and double precision performance.

NVIDIA SMI

Similar to the top command, which shows the jobs currently running on the CPU, NVIDIA provides the nvidia-smi tool, which shows the current usage of the GPUs. It also lets you detect which GPUs are already occupied by jobs running on that machine.
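For example, you can run the tool once for a snapshot of GPU utilization and memory usage, or refresh it periodically before launching a job (the 1-second interval below is only a suggestion):
$ nvidia-smi
$ watch -n 1 nvidia-smi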

Environment Setup

After installing nanoFluidX, it is necessary to set up the environment variables. You can source the environment script that is provided with the installation and located in the installation folder.

When sourcing set_nFX_environment.sh, use the source command; do not execute it with sh.
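For example, assuming the script is in your current working directory:
$ source ./set_nFX_environment.sh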

Sourcing the set_nFX_environment.sh script is necessary every time your session is interrupted, whether by a restart, a shutdown, or any other event that resets the environment variables.

Alternatively, you can copy and paste the content of the set_nFX_environment.sh script into your .bashrc file and source the .bashrc file. This removes the need to explicitly source the environment script in every session.
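A minimal sketch of this approach; the installation path below is a placeholder that you should replace with your actual nanoFluidX installation folder:
$ cat /path/to/nanoFluidX/set_nFX_environment.sh >> ~/.bashrc
$ source ~/.bashrc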

When you run the ldd command on the nanoFluidX binary, all libraries related to CUDA, OpenMPI, and Altair licensing should point to the respective subfolders of the ./libs folder inside the root nanoFluidX folder.
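For example (the exact library names and paths differ between installations):
$ ldd $nFX_SP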

If you are not using the bash shell, but rather csh or tcsh, there is a set_nFX_environment.csh script included in the package that you can use in the same manner as the .sh script.

Launching and Stopping a Simulation

The command to start a simulation with the single precision binary is:
$ CUDA_VISIBLE_DEVICES=0 mpiexec -np 1 $nFX_SP -i test.cfg
The CUDA_VISIBLE_DEVICES environment variable selects the GPU devices used for the job. By setting it, you can avoid running different jobs on the same card. Invoking a simulation without it will always try to run the job on device number zero. For multiple cards, list the device numbers separated by commas and set the number of processes with -np N. Ensure that the number of processes given to -np matches the number of selected GPUs. For example, to run on 3 GPUs, the command should look like:
$ CUDA_VISIBLE_DEVICES=0,1,2 mpiexec -np 3 $nFX_SP -i test.cfg
Note: nanoFluidX uses the CPU only to control the simulation; the actual work is done on the GPU.

To launch a simulation, a configuration file has to be passed to the binary via the -i option, followed by the configuration file name. This file and the corresponding geometry file need to be placed in the folder from which the simulation is started. To run the double precision binary, replace $nFX_SP with $nFX_DP.
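For example, the single-GPU command shown above becomes the following when using the double precision binary:
$ CUDA_VISIBLE_DEVICES=0 mpiexec -np 1 $nFX_DP -i test.cfg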

nanoFluidX is a multi-GPU-capable CFD code, which means that a simulation has to be started with the mpiexec -np N command, where N is the number of GPUs used. This is the case even for a single-GPU run, since the general implementation expects a parallel environment. The CPU-GPU pinning is one-to-one: each MPI rank owns one GPU.
Important: nanoFluidX treats all problems as three-dimensional. Consequently, two-dimensional problems are set up simply by using a zero or constant third vector component in all directional quantities.

To stop a simulation cleanly, create an empty file named SAVEABORTFILE in the root folder of the running case. nanoFluidX detects the newly created file, stops the simulation, and writes the flow fields at the current time step as well as a restart file.
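For example, from the root folder of the running case:
$ touch SAVEABORTFILE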

Try to simulate one of the example cases provided with the binary.