Read Data With AcuFieldView Parallel

As with running client-server, you will need to set up configuration files that allow you to run the AcuFieldView Parallel Server programs.

A simple example of an AcuFieldView Parallel Server Configuration file to run the shared memory MPI version of AcuFieldView Parallel on a shared memory system is given below:
AutoStart: true
ServerType: shmem
ServerName: my_shmem_system
NumProcs: 9
StartDirectory: /usr2/data/test_PAR

This example sets the AutoStart parameter to launch the AcuFieldView Parallel server program(s) automatically. As with client-server, you can also configure this to run manually. If you do, you will need to execute the fvrunsrv command locally.

A total of nine processes has been chosen; this corresponds to the command line argument -np 9 for the fvrunsrv command. This means that there will be one controller process, which acts solely as a dispatcher, and eight worker processes, which read data, started on this shared memory system named my_shmem_system. The fvrunsrv command also starts the AcuFieldView Parallel server program, fvsrv_shmem.

A start directory, /usr2/data/test_PAR, is set so that the file browser used to read data will open at this location.
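
If you configure the server for manual startup instead of AutoStart, the server processes must already be running before you select the server in AcuFieldView. A minimal sketch of such a manual launch is shown below; it assumes that fvrunsrv can be found on your path and that the -np argument described above is sufficient for a shared memory run, so verify the exact usage of fvrunsrv on your system:

fvrunsrv -np 9

This would start the same nine processes, one controller and eight workers, that AutoStart otherwise launches automatically.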

A simple example of an AcuFieldView Parallel Server Configuration file to run the p4 MPI version of AcuFieldView Parallel on a system such as a Linux cluster might look like:
AutoStart: true
ServerType: cluster
ServerName: my_cluster_system
NumProcs: 5
StartDirectory: /usr2/data/test_PAR

In this case, the AutoStart option has again been chosen.

For a parallel cluster, you will need to have the MPICH files and the FV Parallel Server programs installed on the controller node of the system, in this case called my_cluster_system. The installation location for this set of files can be anywhere you choose. When this AcuFieldView Parallel Server is selected, a total of five processes will be run. Again, one of these will be a controller process, and the remaining four will be used to read and process the dataset.

For the cluster option, MPI will use the default "machine" file, openmpi-default-hostfile, found within the MPI installation provided with AcuFieldView. If you do not want MPI to use this default and would rather specify which nodes of your cluster will be used as AcuFieldView "worker" servers, you can (see the sketch following this list):
  • Specify a custom machine file with the fvrunsrv -machinefile option
  • Provide a list of machines with the fvrunsrv -hosts option
  • Use the MachineFile: field of the Server Configuration (.srv) file to specify a custom machine file
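
As an illustration of the first two options, a hypothetical machine file named my_machines, listing one cluster node per line, might look like:

node01
node02
node03
node04

It could then be referenced from a manual launch, for example as fvrunsrv -np 5 -machinefile my_machines. The node names and the file name are placeholders, and the exact argument syntax expected by the -machinefile and -hosts options should be verified against the fvrunsrv usage on your system.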

Note also that all nodes of the cluster must be able to resolve the path given in the ServerDirectory: field of the Server Configuration (.srv) file in order to find the program fvsrv_par.
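
Putting these pieces together, a sketch of a cluster Server Configuration (.srv) file that specifies both a custom machine file and the location of the server programs might look like the following. The ServerDirectory: and MachineFile: paths are illustrative placeholders; substitute the directories where the FV Parallel Server programs and your machine file are actually installed:

AutoStart: true
ServerType: cluster
ServerName: my_cluster_system
NumProcs: 5
StartDirectory: /usr2/data/test_PAR
ServerDirectory: /usr2/apps/fv_parallel
MachineFile: /usr2/apps/fv_parallel/my_machines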

To read data, from the File menu, click Data Input > Choose Server. From this list, select the desired AcuFieldView Parallel Server program.

If both example server configuration files are placed in the <AcuSolve installation directory>/fv/sconfig directory, then when the Choose Server option is selected from the Data Input menu, both the p4-based and shared memory-based AcuFieldView Parallel options for reading data will be available.

You can then read the data via a file browser, which starts on the controller server system.

If you want to quickly determine whether a dataset has multiple grids, a few simple observations can be made by attempting to read the data in the Direct mode of operation, not Parallel or Client-Server. First, if a file has multiple grids and the read is successful, you can determine how many grids are available by reviewing the Grid Subset Selection panel. Also, once the dataset has been read in successfully, you can see the outline of each individual grid in the graphics window. For unstructured data, the number of grids in the dataset, along with the number of nodes and elements for each grid, will be listed in the console window, with one line per grid. A typical console window output might contain the following lines:

Unstructured grid 1 has 81830 nodes and 12614 elements.
Unstructured grid 2 has 68350 nodes and 12191 elements.
Unstructured grid 3 has 61992 nodes and 12449 elements.
Unstructured grid 4 has 55450 nodes and 11502 elements.
Unstructured grid 5 has 79576 nodes and 12545 elements.
Unstructured grid 6 has 81502 nodes and 11656 elements.
Unstructured grid 7 has 82392 nodes and 12786 elements.
Unstructured grid 8 has 74937 nodes and 12813 elements.
Unstructured grid 9 has 51221 nodes and 9562 elements.
Unstructured grid 10 has 54713 nodes and 9574 elements.
Unstructured grid 11 has 49797 nodes and 9172 elements.
Unstructured grid 12 has 45543 nodes and 8938 elements.
Unstructured grid 13 has 48358 nodes and 9769 elements.
Unstructured grid 14 has 48633 nodes and 9696 elements.
Unstructured grid 15 has 52293 nodes and 9633 elements.
Unstructured grid 16 has 43283 nodes and 8838 elements.

At present, there is no feature in AcuFieldView, nor any standalone utility, capable of producing a multi-grid file from a single-grid dataset, or of creating partitioned files from a multi-grid dataset.