FromScratch
Section: Execution
Type: logical
Default: false
When this variable is set to true, Octopus will perform a
calculation from the beginning, without looking for restart
information.
DebugLevel
Section: Execution::Debug
Type: integer
This variable decides whether or not to enter debug mode.
If it is greater than 0, increasing amounts of additional information
are written to standard output, and additional assertion checks are performed.
Options:
ExperimentalFeatures
Section: Execution::Debug
Type: logical
Default: no
If true, allows the use of certain parts of the code that are
still under development and are not suitable for production
runs. This should not be used unless you know what you are doing; see
http://www.tddft.org/programs/octopus/experimental_features
for details.
ForceComplex
Section: Execution::Debug
Type: logical
Default: no
Normally, Octopus automatically determines the type needed
for the wavefunctions. When this variable is set to yes, the
use of complex wavefunctions is forced.
Warning: This variable is designed for testing and
benchmarking; normal users need not use it.
MPIDebugHook
Section: Execution::Debug
Type: logical
Default: no
When debugging the code in parallel it is usually difficult to find the origin
of race conditions that appear in MPI communications. This variable introduces
a facility to control separate MPI processes. If set to yes, all nodes will
start up, but will get trapped in an endless loop. In every cycle of the loop
each node is sleeping for one second and is then checking if a file with the
name node_hook.xxx (where xxx denotes the node number) exists. A given node can
only be released from the loop once the corresponding file is created. This makes
it possible to run, e.g., a compute node first followed by the master node, or, by
reversing the order in which the hook files are created, to run the master first
followed by a compute node.
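For example, to release node 1 before node 0, one could create the hook files from the shell in that order (the three-digit suffix shown here is an assumption about the node-number format):
touch node_hook.001
touch node_hook.000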
ReportMemory
Section: Execution::Debug
Type: logical
Default: no
If true, Octopus will print, as part of the screen output,
information about the memory the code is using. The quantity
reported is an approximation of the size of the heap and
generally a lower bound on the actual memory Octopus is using.
FlushMessages
Section: Execution::IO
Type: logical
Default: no
In addition to writing to stdout and stderr, the code messages may also be
flushed to messages.stdout and messages.stderr, if this variable is
set to yes.
RestartDir
Section: Execution::IO
Type: string
Default: ''
When Octopus reads restart files, e.g. when running a time-propagation
after a ground-state calculation, these files will be read from
<RestartDir>/. Usually RestartDir is the same as
TmpDir, but in a transport calculation the output of
a periodic dataset is required to calculate the extended ground state.
RestartWrite
Section: Execution::IO
Type: logical
Default: true
If this variable is set to no, restart information is not
written. The default is yes.
TmpDir
Section: Execution::IO
Type: string
Default: "restart/"
The name of the directory where Octopus stores binary information
such as the wavefunctions.
WorkDir
Section: Execution::IO
Type: string
Default: "."
By default, all files are written and read from the working directory,
i.e. the directory from which the executable was launched. This behavior can
be changed by setting this variable: if you give it a name (other than ".")
the files are written and read in that directory.
stderr
Section: Execution::IO
Type: string
Default: "-"
The standard error by default goes to, well, to standard error. This can
be changed by setting this variable: if you give it a name (other than "-")
the output stream is redirected to that file instead.
stdout
Section: Execution::IO
Type: string
Default: "-"
The standard output by default goes to, well, to standard output. This can
be changed by setting this variable: if you give it a name (other than "-")
the output stream is redirected to that file instead.
DisableOpenCL
Section: Execution::OpenCL
Type: logical
Default: yes
If Octopus was compiled with OpenCL support, it will try to
initialize and use an OpenCL device. By setting this variable
to yes you tell Octopus not to use OpenCL.
OpenCLDevice
Section: Execution::OpenCL
Type: integer
This variable selects the OpenCL device that Octopus will
use. You can specify one of the options below or a numerical
id to select a specific device.
Options:
OpenCLPlatform
Section: Execution::OpenCL
Type: integer
This variable selects the OpenCL platform that Octopus will
use. You can give an explicit platform number or use one of
the options that select a particular vendor
implementation. Platform 0 is used by default.
Options:
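As a sketch (the numerical ids are hypothetical and machine-dependent), an explicit platform and device could be selected with:
OpenCLPlatform = 0
OpenCLDevice = 1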
MemoryLimit
Section: Execution::Optimization
Type: integer
Default: -1
If positive, Octopus will stop if more memory than MemoryLimit
is requested (in kb). Note that this variable only works when
ProfilingMode = prof_memory(_full).
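For example, assuming a memory-profiling run, the calculation could be made to stop if more than about 2 GB (value in kb, as noted above) is requested:
ProfilingMode = prof_memory
MemoryLimit = 2000000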
MeshBlockSize
Section: Execution::Optimization
Type: block
To improve memory-access locality when calculating derivatives,
Octopus arranges mesh points in blocks. This variable
controls the size of these blocks in the different
directions. The default is | 20 | 20 | 100 |. (This variable only
affects the performance of Octopus and not the
results.)
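As a sketch, the default corresponds to the block:
%MeshBlockSize
 20 | 20 | 100
%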
NLOperatorCompactBoundaries
Section: Execution::Optimization
Type: logical
Default: no
(Experimental) When set to yes, for finite systems Octopus will
map boundary points for finite-differences operators to a few
memory locations. This increases performance; however, it is
experimental and has not been thoroughly tested.
OperateComplex
Section: Execution::Optimization
Type: integer
This variable selects the subroutine used to apply non-local
operators over the grid for complex functions.
By default the optimized version is used (except in single-precision build).
Options:
OperateDouble
Section: Execution::Optimization
Type: integer
This variable selects the subroutine used to apply non-local
operators over the grid for real functions.
By default the optimized version is used (except in single-precision build).
Options:
OperateOpenCL
Section: Execution::Optimization
Type: integer
Default: map
This variable selects the subroutine used to apply non-local
operators over the grid when OpenCL is used. The default is map.
Options:
ProfilingAllNodes
Section: Execution::Optimization
Type: logical
Default: no
This variable controls whether all nodes print the time
profiling output. If set to no, the default, only the root node
will write the profile. If set to yes, all nodes will print it.
ProfilingMode
Section: Execution::Optimization
Type: integer
Default: no
Use this variable to run Octopus in profiling mode. In this mode
Octopus records the time spent in certain areas of the code and
the number of times this code is executed. These numbers
are written to ./profiling.NNN/profiling.nnn, with nnn being the
node number (000 in serial) and NNN the number of processors.
This is mainly for development purposes. Note, however, that
Octopus should be compiled with --disable-debug to do proper
profiling.
Options:
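For instance, a memory-profiling run in which every node writes its own profile (see ProfilingAllNodes above; prof_memory is one of the option values referred to here and under MemoryLimit) could be requested with:
ProfilingMode = prof_memory
ProfilingAllNodes = yes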
StatesBlockSize
Section: Execution::Optimization
Type: integer
Some routines work over blocks of eigenfunctions, which
generally improves performance at the expense of increased
memory consumption. This variable selects the size of the
blocks to be used. If OpenCL is enabled, the default is 32;
otherwise it is max(4, 2*nthreads).
StatesCLDeviceMemory
Section: Execution::Optimization
Type: float
Default: -512
This variable selects the amount of OpenCL device memory that
will be used by Octopus to store the states.
A positive number smaller than 1 indicates a fraction of the total
device memory. A number larger than 1 indicates an absolute
amount of memory in megabytes. A negative number indicates an
amount of memory in megabytes to be subtracted from
the total device memory.
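For example (hypothetical values), half of the device memory could be reserved for the states with:
StatesCLDeviceMemory = 0.5
while a value of 1024 would request 1024 megabytes, and the default of -512 leaves 512 megabytes of device memory free for other uses.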
StatesOrthogonalization
Section: Execution::Optimization
Type: integer
The full orthogonalization method used by some
eigensolvers. The default is gram_schmidt. With state
parallelization the default is par_gram_schmidt.
Options:
StatesPack
Section: Execution::Optimization
Type: logical
Default: no
(Experimental) When set to yes, states are stored in packed
mode, which improves performance considerably. However, this
is not fully implemented and it might give wrong results.
If OpenCL is used and this variable is set to yes, Octopus
will store the wave-functions in device (GPU) memory. If
there is not enough memory to store all the wave-functions,
execution will stop with an error.
MeshPartition
Section: Execution::Parallelization
Type: integer
Decides which algorithm is used to partition the mesh. By default,
graph partitioning is used for 8 or more partitions, and rcb for fewer.
Applies only if MeshPartitionPackage = metis or zoltan.
All methods are available with Zoltan, but only rcb and graph with METIS.
Options:
MeshPartitionFromScratch
Section: Execution::Parallelization
Type: logical
Default: false
If set to no (the default) Octopus will try to use the mesh
partition from a previous run if available.
MeshPartitionGAMaxSteps
Section: Execution::Parallelization
Type: integer
Default: 1000
The number of steps performed for the genetic algorithm used
to optimize the mesh partition. The default is 1000.
MeshPartitionGAPopulation
Section: Execution::Parallelization
Type: integer
Default: 30
The size of the population used for the genetic algorithm used
to optimize the mesh partition. The default is 30.
MeshPartitionPackage
Section: Execution::Parallelization
Type: integer
Default: metis
Decides which library to use to perform the mesh partition. By
default, METIS is used (if available); otherwise, Zoltan is used.
METIS is faster than Zoltan but uses more memory since it is serial.
Options:
MeshPartitionStencil
Section: Execution::Parallelization
Type: integer
Default: star
To partition the mesh, it is necessary to calculate the connection
graph connecting the points. This variable selects which stencil
is used to do this. The default is the order-one star stencil.
Alternatively, the stencil used for the Laplacian may be used.
Options:
MeshUseTopology
Section: Execution::Parallelization
Type: logical
Default: false
(experimental) If enabled, Octopus will use an MPI virtual
topology to map the processors. This can improve performance
for certain interconnection systems.
ParallelizationGroupRanks
Section: Execution::Parallelization
Type: block
Specifies the size of the groups used for the
parallelization, as one number each for domain, states, k-points, and other.
For example (n_d, n_s, n_k, n_o) means we have
n_d*n_s*n_k*n_o processors and that electron-hole pairs (only for CalculationMode = casida)
will be divided into n_o groups, the k-points should be
divided into n_k groups, the states into n_s groups, and the grid
points into n_d domains. You can pass the value fill to one
field: it will be replaced by the value required to complete
the number of processors in the run. Any value for the column corresponding to
a parallelization strategy unavailable for the current CalculationMode will be ignored.
Options:
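As a sketch for a run on 8 processors (the numbers are hypothetical), one could ask for 2 domain groups and let the number of states groups be determined automatically:
%ParallelizationGroupRanks
 2 | fill | 1 | 1
%
Here fill is replaced by 4, the value required to complete the 8 processors.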
ParallelizationNumberSlaves
Section: Execution::Parallelization
Type: integer
Slaves are nodes used for task parallelization. The number of
such nodes is given by this variable multiplied by the number
of domains used in domain parallelization. The default is 0.
ParallelizationOfDerivatives
Section: Execution::Parallelization
Type: integer
Default: non_blocking
This option selects how the communication of mesh boundaries is performed.
Options:
ParallelizationPoissonAllNodes
Section: Execution::Parallelization
Type: logical
Default: true
When running in parallel, this variable selects whether the
Poisson solver should divide the work among all nodes or only
among the parallelization-in-domains groups.
ParallelizationStrategy
Section: Execution::Parallelization
Type: flag
Specifies what kind of parallelization strategy Octopus should use.
The values can be combined: for example, par_domains + par_states
means a combined parallelization in domains and states.
Default: par_domains + par_states for CalculationMode = td,
otherwise par_domains.
Options:
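For example, the default behavior for CalculationMode = td corresponds to:
ParallelizationStrategy = par_domains + par_states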
PartitionPrint
Section: Execution::Parallelization
Type: logical
Default: true
(experimental) If disabled, Octopus will neither compute
nor print the partition information, such as local points,
number of neighbours, ghost points, and boundary points.
SymmetriesCompute
Section: Execution::Symmetries
Type: logical
Default: (natoms < 100) ? true : false
If disabled, Octopus will neither compute
nor print the symmetries.
Units
Section: Execution::Units
Type: integer
Default: atomic
This variable selects the units that Octopus uses for
input and output.
Atomic units seem to be the preferred system in the atomic and
molecular physics community. Internally, the code works in
atomic units. However, for input or output, some people like
to use a system based on electronvolts (eV) for energies and
Ångströms (Å) for lengths. The default is atomic units.
Normally the time unit is derived from the energy and length units,
so time is measured in ℏ/Hartree or
ℏ/eV. Alternatively, you can tell
Octopus to use femtoseconds as the time unit by adding the
value femtoseconds (note that no other unit will be
based on femtoseconds). So, for example, you can use:
Units = femtoseconds
or
Units = ev_angstrom + femtoseconds
You can use different unit systems for input and output by
setting the variables UnitsInput and UnitsOutput.
Warning 1: All files read on input will also be treated using
these units, including XYZ geometry files.
Warning 2: Some values are treated in their most common units,
for example atomic masses (a.m.u.), electron effective masses
(electron mass), vibrational frequencies
(cm^-1), or temperatures (Kelvin). The unit of charge is always
the electronic charge e.
Options:
UnitsInput
Section: Execution::Units
Type: integer
Default: atomic
Same as Units, but only refers to input values.
UnitsOutput
Section: Execution::Units
Type: integer
Default: atomic
Same as Units, but only refers to output values.
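For example (a sketch; the option names are taken from the defaults and examples above), input could be given in eV/Ångström units while keeping atomic units for the output:
UnitsInput = ev_angstrom
UnitsOutput = atomic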