[visit-users] instrumenting a simulation code. v 1.12.2

Brad Whitlock whitlock2 at llnl.gov
Tue Mar 9 11:52:46 EST 2010


Jean,
    Have you implemented the GetDomainList callback function in your 
simulation code? This callback tells VisIt's load balancer how many 
domains there are and which processors own them. The typical use case 
is that you create a domain list with the total number of domains set 
to the number of processors and then each processor owns only the 
domain corresponding to its processor rank. After supplying the 
domain list, GetMesh will be called only once per processor and each 
processor returns only its local mesh. See 
src/tools/DataManualExamples/Simulations/updateplots.c for an example 
simulation that works in parallel. I'm pretty sure it works with 
1.12.2; I know it works with the trunk.
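
From memory, a minimal GetDomainList for the one-domain-per-processor 
case looks something like the sketch below. I'm writing par_rank and 
par_size for your MPI rank and communicator size, however you store 
them; check updateplots.c and VisItDataInterface_V1.h for the exact 
structure and function names.

#include <stdlib.h>

/* Set from MPI_Comm_rank/MPI_Comm_size during initialization. */
extern int par_rank, par_size;

VisIt_DomainList *VisItGetDomainList(void)
{
    VisIt_DomainList *dl = (VisIt_DomainList *)malloc(sizeof(VisIt_DomainList));
    int *domain = (int *)malloc(sizeof(int));
    *domain = par_rank;

    dl->nTotalDomains = par_size; /* as many domains as processors */
    dl->nMyDomains = 1;           /* this processor owns just one... */
    /* ...namely the domain whose id equals our rank. VISIT_OWNER_VISIT
       lets VisIt free the small array when it is done with it. */
    dl->myDomains = VisIt_CreateDataArrayFromInt(VISIT_OWNER_VISIT, domain);
    return dl;
}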

There is no analogue to "FormatCanDoDomainDecomposition" in 
simulations since simulations are already domain-decomposed. That 
setting is for large single-domain files that need to be parallelized 
on the fly when running parallel VisIt.
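
As for what each rank should hand back from GetMesh: return only your 
local sub-piece, with dims set to the local node counts and the 
coordinate arrays holding only the local coordinates. Roughly like the 
sketch below; local_nx, start_i, local_x and friends are placeholders 
for your own data, and the baseIndex field (which places the block in 
the whole mesh's index space) should be checked against 
VisItDataInterface_V1.h.

#include <stdlib.h>
#include <string.h>

/* Placeholders: local node counts, global start indices of this
 * rank's block, and this rank's float coordinate arrays. */
extern int local_nx, local_ny, local_nz;
extern int start_i, start_j, start_k;
extern float *local_x, *local_y, *local_z;

VisIt_MeshData *VisItGetMesh(int domain, const char *name)
{
    VisIt_MeshData *mesh = (VisIt_MeshData *)malloc(sizeof(VisIt_MeshData));
    memset(mesh, 0, sizeof(VisIt_MeshData));

    mesh->meshType = VISIT_MESHTYPE_RECTILINEAR;
    mesh->rmesh = (VisIt_RectilinearMesh *)malloc(sizeof(VisIt_RectilinearMesh));
    memset(mesh->rmesh, 0, sizeof(VisIt_RectilinearMesh));

    mesh->rmesh->ndims = 3;
    mesh->rmesh->dims[0] = local_nx;  /* local node counts, not global */
    mesh->rmesh->dims[1] = local_ny;
    mesh->rmesh->dims[2] = local_nz;

    /* Where this rank's block starts in the global index space. */
    mesh->rmesh->baseIndex[0] = start_i;
    mesh->rmesh->baseIndex[1] = start_j;
    mesh->rmesh->baseIndex[2] = start_k;

    /* Local coordinates only; float arrays are fine here. */
    mesh->rmesh->xcoords = VisIt_CreateDataArrayFromFloat(VISIT_OWNER_SIM, local_x);
    mesh->rmesh->ycoords = VisIt_CreateDataArrayFromFloat(VISIT_OWNER_SIM, local_y);
    mesh->rmesh->zcoords = VisIt_CreateDataArrayFromFloat(VISIT_OWNER_SIM, local_z);
    return mesh;
}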

Brad

>Dear List
>
>I am instrumenting a parallel code which handles one single 
>(distributed) mesh. Following all the steps in the manuals, I got it 
>to the point where I can connect and visualize my simulation mesh 
>if I do an MPI run of size 1 (i.e. serial).
>When truly parallel, VisIt hangs when I ask to display the mesh. The 
>status bar is left at 15%.
>
>After reducing my code to a bare skeleton, I am now faced with 
>errors like "coordinates arrays must be double or float", whereas I 
>am sure I am passing float arrays via the 
>CreateDataArray_FromFloat() calls.
>
>At this point, I am not sure what grid resolution and X, Y and Z 
>coordinate arrays should be handed out by the MPI ranks in 
>VisItGetMesh().
>
>If I follow the same model as a parallel reader (which I have 
>already built successfully), my reader advertises one single large 
>mesh and I set
>
>md->SetFormatCanDoDomainDecomposition(true)
>mesh->numBlocks = 1
>
>and each MPI rank constructs its own local sub-piece. Each piece is 
>a smaller vtkRectilinearGrid.
>
>So, what about instrumenting the parallel code directly? How do I 
>say the equivalent of CanDoDomainDecomposition(true)? Or is it 
>implicit? And should each MPI rank advertise its own local grid 
>resolution for data and coordinate arrays?
>
>
>-----------------
>Jean M. Favre
>Swiss National Supercomputing Center
>--
>List subscription information: 
>https://email.ornl.gov/mailman/listinfo/visit-users
>Searchable list archives: https://email.ornl.gov/pipermail/visit-users
>VisIt Users Wiki: http://visitusers.org/
>Frequently Asked Questions for VisIt: http://visit.llnl.gov/FAQ.html


-- 
======================================================================
Brad Whitlock                   Lawrence Livermore National Laboratory
whitlock2 at llnl.gov
(925)424-2614
======================================================================

