Welcome to grpc4bmi’s documentation!¶
Introduction¶
The Basic Modeling Interface (BMI, see https://github.com/csdms/bmi) is a multi-language library interface tailored to earth system models. This software allows you to wrap a BMI implementation in a server process and communicate with it via the included python client. The communication is serialized to protocol buffers by GRPC (https://grpc.io/) and occurs over network ports. On the server side, we support BMI implementations in python, R or C/C++. Fortran models need to be linked against the C-version of the BMI. On the client side, we expose the standard python BMI (https://github.com/csdms/bmi-python/).
This setup enables you to wrap your BMI-enabled model in a Docker (https://www.docker.com/) or Singularity (https://singularity.lbl.gov/) container and communicate with it from a python process on the host machine.
Installing requirements¶
To use grpc4bmi, the Python package should be installed.
If your model uses some virtual environment with installed dependencies (e.g. Anaconda or virtualenv), activate this environment before installing grpc4bmi.
Install the grpc4bmi python package with pip:
$ pip install grpc4bmi
This will install the latest release of the package; for the most recent GitHub revision, type instead
$ pip install git+https://github.com/eWaterCycle/grpc4bmi.git#egg=grpc4bmi
Creating a BMI server¶
Python¶
If you have a BMI-compliant model written in python, grpc4bmi provides a quick way to set up a BMI service.
Creating¶
To obtain a python BMI for your model, install the Python bmi package (bmipy) and implement the bmipy.Bmi
abstract base class for your model. For exposing this model as a GRPC service, it is necessary to have a constructor without arguments: all initialization state will be presented to the model via the configuration file in the initialize
method.
Running¶
The installation of the grpc4bmi package installs the run-bmi-server
command. You can run your model as a service by typing
$ run-bmi-server --name <PACKAGE>.<MODULE>.<CLASS>
where <PACKAGE> and <MODULE> are the python package and module containing your python BMI model, which should contain a python class <CLASS> that implements Bmi. The script assumes that this class does not take any constructor arguments. Upon running, the server will report on the terminal which networking port it has decided to use. This port will later be needed by BMI clients to communicate with your service.
The port can also be specified by adding the option --port <PORT> or by pre-defining the environment variable BMI_PORT (the latter takes precedence over the former). An extra system path can be specified by adding the option --path <PATH> or by pre-defining the environment variable BMI_PATH.
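For instance, a hypothetical invocation that fixes the port through BMI_PORT and adds an extra module search path could look like this (the class name matches the example below; the path is a placeholder):
$ BMI_PORT=55555 run-bmi-server --name mypackage.mymodule.MyBmi --path /opt/mymodeldir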
Example¶
As an example, suppose we have a package
mypackage
- __init__.py
- mymodule.py
and inside the mymodule.py
the bmi implementation
from bmipy import Bmi
class MyBmi(Bmi):
    def __init__(self):
        ...
    def get_component_name(self):
        return "Hello world"
Then we launch this toy model as a service by executing
$ run-bmi-server --name mypackage.mymodule.MyBmi
This will report the chosen port number in the standard output stream. It can be used to connect to the service via the BMI grpc python client.
R¶
Grpc4bmi allows you to wrap a hydrological model written in the R language into a GRPC server.
Installing Requirements¶
The bmi-r package can be installed using the following devtools command
devtools::install_github("eWaterCycle/bmi-r")
Creating¶
A model must implement the basic model interface (bmi).
This can be done by sub-classing the AbstractBmi class found in the bmi-r R package.
A model (in the example called mymodel) can then be given a basic model interface with something like
library(R6)
library(bmi)
library(mymodel)
MyModelBmi <- R6Class(
  inherit = AbstractBmi,
  public = list(
    getComponentName = function() return('mymodel'),
    bmi_initialize = function(config_file) {
      # TODO Construct & initialize mymodel model
    },
    update = function() {
      # TODO evolve mymodel model to next time step
    }
    # TODO implement all other bmi functions
  )
)
For an example of a BMI interface of the Wageningen Lowland Runoff Simulator (WALRUS) see walrus-bmi.r
Running¶
Once the model has a BMI interface, it can be run as a GRPC server by installing the grpc4bmi[R] Python package with
pip install grpc4bmi[R]
The server can be started with
run-bmi-server --lang R [--path <R file with BMI model>] --name [<PACKAGE>::]<CLASS> --port <PORT>
For the WALRUS model the command is
run-bmi-server --lang R --path ~/git/eWaterCycle/grpc4bmi-examples/walrus/walrus-bmi.r --name WalrusBmi --port 55555
The Python grpc4bmi client (see Using the client) can then be used to connect to the server.
Note that the --port and --path arguments can also be specified as the environment variables BMI_PORT and BMI_PATH, respectively.
C/C++/Fortran¶
Installing Requirements¶
For native programming languages it is necessary to install and compile the C++ bindings of GRPC and protobuf on your system:
git clone -b $(curl -L https://grpc.io/release) --depth=1 https://github.com/grpc/grpc
cd grpc
git submodule update --init --recursive
wget -q -O cmake-linux.sh https://github.com/Kitware/CMake/releases/download/v3.16.5/cmake-3.16.5-Linux-x86_64.sh
sudo sh cmake-linux.sh -- --skip-license --prefix=/usr/local
rm cmake-linux.sh
mkdir cmake/build && cd cmake/build
/usr/local/bin/cmake ../.. -DgRPC_INSTALL=ON -DgRPC_SSL_PROVIDER=package -DgRPC_BUILD_TESTS=OFF -DBUILD_SHARED_LIBS=ON
sudo make -j4 install
sudo ldconfig
You will also need to compile and install grpc4bmi itself:
git clone --depth=1 https://github.com/eWaterCycle/grpc4bmi.git
cd grpc4bmi && git submodule update --init
cd cpp
mkdir -p build && cd build && cmake .. && sudo make install
Creating¶
The grpc4bmi package comes with a C++ abstract base class that contains the BMI functions. The header file will
be copied to your system include path upon the installation steps above. Write an implementation of the Bmi
class using your model time step code and data structures. You don’t have to worry about global variables in your model code: with grpc4bmi every model instance runs in its own memory space. For the same reason, the get_value_ptr
and set_value_ptr
methods can be safely ignored; they are never called through the grpc process bridge.
Running¶
Since native languages lack reflection, it is necessary to write your own run_bmi_server
program. We provide a function run_bmi_server(Bmi*, int, char*[])
in the bmi_grpc_server.h
header that can be called with your model instance (see the example below). To compile your server binary, it is necessary to link against grpc4bmi and protobuf libraries.
The program will accept a single optional argument which is the port the server will run on. The port can also be specified using the BMI_PORT environment variable. The default port is 50051.
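For example, once the server binary from the example below has been built, it could be started on a fixed port in either of these ways (my_bmi_server is the binary name used in that example):
$ ./my_bmi_server 55555
$ BMI_PORT=55555 ./my_bmi_server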
Example¶
To create a BMI for your model, write a header file in which you declare the overridden functions of the base class Bmi
in the included file bmi_class.h
.
my_bmi_model.h:
#include <bmi_class.h>
class MyBmiModel: public Bmi
{
    public:
        MyBmiModel();
        int initialize(const char* config_file) override;
        ...
        int get_component_name(char* name) const override;
};
Write your implementation of the basic modeling interface in the corresponding source file
my_bmi_model.cc:
#include <my_bmi_model.h>
#include <cstring>
MyBmiModel::MyBmiModel(){}
int MyBmiModel::initialize(const char* config_file)
{
    /* ...initialize the model from config_file... */
    return BMI_SUCCESS;
}
...
int MyBmiModel::get_component_name(char* name) const
{
    strcpy(name, "Hello world");
    return BMI_SUCCESS;
}
Now the BMI server can simply be implemented as
run_my_bmi_model.cc:
#include "bmi_grpc_server.h"
#include "my_bmi_model.h"
int main(int argc, char* argv[])
{
    Bmi* model = new MyBmiModel();
    run_bmi_server(model, argc, argv);
    delete model;
    return 0;
}
This binary will need to be linked against grpc4bmi and the protobuf libraries:
g++ -o my_bmi_server run_my_bmi_model.o my_bmi_model.o `pkg-config --libs protobuf grpc++ grpc` -Wl,--no-as-needed -lgrpc++_reflection -ldl -lgrpc4bmi
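The object files in the command above can, for instance, be produced with a compile step along these lines (a sketch; include paths may differ on your system):
g++ -c run_my_bmi_model.cc my_bmi_model.cc `pkg-config --cflags protobuf grpc++ grpc`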
Fortran¶
In case you have a Fortran model, we advise writing the corresponding functions in Fortran first and exporting them to the C++ implementation, e.g.
my_bmi_model.f90:
subroutine get_component_name(name) bind(c, name="get_component_name_f")
    use, intrinsic :: iso_c_binding
    implicit none
    character(kind=c_char), intent(out) :: name(*)
    name(1:11) = "Hello world"
    name(12) = c_null_char
end subroutine get_component_name
Now it is possible to call this function from the BMI C implementation as follows,
my_bmi_model.cc:
extern "C" void get_component_name_f(char*);
int MyBmiModel::get_component_name(char* name) const
{
    get_component_name_f(name);
    return BMI_SUCCESS;
}
Using the client¶
We assume that a service is always dedicated to a single client; addressing a BMI model with multiple users at the same time results in undefined behavior.
Python BMI Client¶
For a given running BMI service process connected to networking port <PORT>
, we can start communicating with this server by instantiating the grpc4bmi.bmi_grpc_client.BmiClient
python class:
import grpc
from grpc4bmi.bmi_grpc_client import BmiClient
mymodel = BmiClient(grpc.insecure_channel("localhost:<PORT>"))
For the example model launched in Example, the component name can be retrieved following the usual BMI syntax,
print(mymodel.get_component_name())
Hello world
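Any other BMI call is forwarded in the same way. For a full model (the toy example above only implements get_component_name), a session could look something like the following sketch, with a placeholder configuration file name:
mymodel.initialize("config.yaml")
mymodel.update()
print(mymodel.get_current_time())
mymodel.finalize()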
Python Subprocess¶
This python class launches a BMI server upon creation,
from grpc4bmi.bmi_client_subproc import BmiClientSubProcess
model = BmiClientSubProcess("<PACKAGE>.<MODULE>.<CLASS>")
The code above will execute run-bmi-server
in a python subprocess and automatically listen to the appropriate port. Note that this requires your client to run in the same python environment as your model.
Running Python server explains the roles of <PACKAGE>, <MODULE> and <CLASS>.
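For the toy model from the Example section above, a minimal session could look like this; deleting the client also stops the server subprocess:
from grpc4bmi.bmi_client_subproc import BmiClientSubProcess
model = BmiClientSubProcess("mypackage.mymodule.MyBmi")
print(model.get_component_name())
del model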
Polyglot CLI¶
Once you have started a GRPC server you can test it by connecting to it with Polyglot, a universal grpc command line client.
Polyglot requires Java; the polyglot.jar file can be downloaded at https://github.com/dinowernli/polyglot/releases
The following command expects a GRPC server running on localhost on port 55555.
To get the component name use
echo '{}' | java -jar polyglot.jar call --endpoint=localhost:55555 --full_method=bmi.BmiService/getComponentName
Building a docker image¶
The biggest advantage of using grpc4bmi is that you can embed the model code in a container like a Docker image. The grpc bridge allows you to address it from the host machine with the python BMI.
To establish this, install your BMI model and grpc4bmi inside the container, and let run-bmi-server
act as the entry point of the docker image.
Python¶
The Dockerfile for the model container simply contains the installation instructions for grpc4bmi and the BMI-enabled model itself, with the run-bmi-server
command as the entry point. For the python example the Dockerfile reads
FROM ubuntu:bionic
MAINTAINER your name <your email address>
# Install grpc4bmi (the base image ships without pip, so install python3-pip first)
RUN apt update && apt install -y python3-pip git && \
    pip3 install git+https://github.com/eWaterCycle/grpc4bmi.git#egg=grpc4bmi
# Install here your BMI model:
RUN git clone <MODEL-URL> /opt/mymodeldir
# Run bmi server on the exposed port
ENV BMI_PORT=55555
ENTRYPOINT ["run-bmi-server", "--name", "mypackage.mymodule.MyBmi", "--path", "/opt/mymodeldir"]
# Expose the magic grpc4bmi port
EXPOSE 55555
The port 55555 is the internal port in the Docker container over which the model server communicates. It is also the default image port that the container clients described below expect the server to listen on.
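Building and running such an image could, for instance, look as follows (the image name is arbitrary; publishing to Docker Hub is covered below):
$ docker build -t mymodel-grpc4bmi .
$ docker run -d -p 55555:55555 mymodel-grpc4bmi
The published host port can then be used by the python BMI client described in Using the client.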
R¶
The Docker image can be made by writing a Dockerfile like
FROM r-base
LABEL maintainer="Your name <your email address>"
RUN apt update && apt install -t unstable -y python3-dev python3-pip git && \
pip3 install git+https://github.com/eWaterCycle/grpc4bmi.git#egg=grpc4bmi[R]
RUN install.r remotes && installGithub.r eWaterCycle/bmi-r
RUN install.r <R mymodel library from CRAN>
# Copy BMI interface of model into Docker image
RUN mkdir /opt/
COPY mymodel-bmi.r /opt/
# Config file and forcing file will be mounted at /data
RUN mkdir /data
WORKDIR /data
VOLUME /data
ENV BMI_PORT=55555
CMD ["run-bmi-server", "--lang", "R", "--path", "/opt/mymodel-bmi.r", "--name", "mymodel"]
EXPOSE 55555
The WALRUS model has a Dockerfile which can be used as an example.
C/C++/Fortran¶
For native languages you need to compile your BMI model inside the container and let your BMI server runner binary act as the entry point. The protobuf, grpc and grpc4bmi libraries need to be installed in your Docker image, which means that the installation instructions above must be incorporated into your Dockerfile. Then, include the installation of the model itself and the BMI run binary that you have written (as described here). Finally, the entry point in the Dockerfile should launch this binary and you should expose port 55555. For the C++ example the Dockerfile could read
# ...download, compile and install grpc and grpc4bmi...
# ...download, compile and install my_bmi_model...
# Run bmi server
ENTRYPOINT ["my_bmi_server"]
# Expose the magic grpc4bmi port
EXPOSE 55555
Building and Publishing¶
The Docker image can be built with
docker build -t <image name> .
The Docker image can be published at Docker Hub by creating a repository and pushing it with
docker push <image name>
The example WALRUS model is published at https://cloud.docker.com/u/ewatercycle/repository/docker/ewatercycle/walrus-grpc4bmi.
The Docker image can then be started with the grpc4bmi docker client.
Using the container clients¶
Docker¶
Grpc4bmi can run containers with Docker engine.
Use the grpc4bmi.bmi_client_docker.BmiClientDocker
class to start a Docker container and get a client to interact with the model running inside the container.
For example the PCR-GLOBWB model can be started in a Docker container with
from grpc4bmi.bmi_client_docker import BmiClientDocker
model = BmiClientDocker(image='ewatercycle/pcrg-grpc4bmi:latest', image_port=55555,
input_dir="./input",
output_dir="./output")
# Interact with model
model.initialize('config.cfg')
# Stop container
del model
Singularity¶
Grpc4bmi can run containers on Singularity.
The Docker images built previously can either be run directly or converted to a Singularity image file and run.
To run a Docker image directly, use docker://<docker image name> as the Singularity image name.
To convert a Docker image to a singularity image file use
singularity build <singularity image filename> docker://<docker image name>
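For instance, converting the WALRUS image published above could look like this (the .sif filename is arbitrary):
singularity build walrus-grpc4bmi.sif docker://ewatercycle/walrus-grpc4bmi:latest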
Use the grpc4bmi.bmi_client_singularity.BmiClientSingularity
class to start a Singularity container and get a client to interact with the model running inside the container.
from grpc4bmi.bmi_client_singularity import BmiClientSingularity
image = '<docker image name of grpc4bmi server of a bmi model>'
client = BmiClientSingularity(image, input_dir='<directory with models input data files>')
For example for the wflow Docker image the commands would be the following
from grpc4bmi.bmi_client_singularity import BmiClientSingularity
image = 'docker://ewatercycle/wflow-grpc4bmi:latest'
client = BmiClientSingularity(image, input_dir='wflow_rhine_sbm', output_dir='wflow_output')
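After construction the client behaves like any other BMI instance; a minimal continuation of this example (the configuration file path follows the BmiClientSingularity documentation below) could be:
client.initialize('wflow_rhine_sbm/wflow_sbm.ini')
client.update_until(client.get_end_time())
del client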
Command line tools¶
run-bmi-server¶
BMI GRPC server runner
usage: run-bmi-server [-h] [--name PACKAGE.MODULE.CLASS] [--port N]
[--path DIR] [--language {python}]
[--bmi-version {1.0.0,0.2}] [--debug]
Named Arguments¶
--name, -n
Full name of the BMI implementation class. The module should be in your search path and the class should have a constructor with no arguments.
--port, -p
Network port for the GRPC server and client. If 0, let the OS choose an available port. If the BMI_PORT environment variable is specified, it will take precedence over this argument. Default: 0
--path, -d
Extra path name to append to the module search path of the server process.
--language
Possible choices: python. Language in which the BMI implementation class is written. Default: “python”
--bmi-version
Possible choices: 1.0.0, 0.2. Version of the BMI interface implemented by the model. Default: “1.0.0”
--debug
Run server in debug mode. Logs errors with stacktraces and returns the stacktrace in the error response. Default: False
Python API¶
grpc4bmi package¶
Submodules¶
grpc4bmi.bmi_client_docker module¶
class grpc4bmi.bmi_client_docker.BmiClientDocker(image, image_port=55555, host=None, input_dir=None, output_dir=None, user=1005, remove=False, delay=5, timeout=None, extra_volumes=None)
Bases: grpc4bmi.bmi_grpc_client.BmiClient
BMI gRPC client for dockerized server processes: the initialization launches the docker container which should have the run-bmi-server as its command. Also, it should expose the tcp port 55555 for communication with this client. Upon destruction, this class terminates the corresponding docker server.
Parameters: - image (str) – Docker image name of grpc4bmi wrapped model
- image_port (int) – Port of server inside the image
- host (str) – Host on which the image port is published on a random port
- input_dir (str) – Directory for input files of model
- output_dir (str) – Directory for output files of model
- user (str) – Username or UID of Docker container
- remove (bool) – Automatically remove the container and logs when it exits.
- delay (int) – Seconds to wait for Docker container to start up, before connecting to it
- timeout (int) – Seconds to wait for gRPC client to connect to server
- extra_volumes (Dict[str,Dict]) – Extra volumes to attach to the Docker container. The key is either the host's path or a volume name and the value is a dictionary with the keys:
bind – The path to mount the volume inside the container
mode – Either 'rw' to mount the volume read/write, or 'ro' to mount it read-only.
For example:
{'/data/shared/forcings/': {'bind': '/forcings', 'mode': 'ro'}}
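Putting this together, a minimal sketch of constructing the client with such an extra read-only volume (the image name and directories are illustrative, taken from the Docker example earlier in this documentation):
from grpc4bmi.bmi_client_docker import BmiClientDocker
model = BmiClientDocker(image='ewatercycle/pcrg-grpc4bmi:latest',
                        input_dir='./input', output_dir='./output',
                        extra_volumes={'/data/shared/forcings/': {'bind': '/forcings', 'mode': 'ro'}})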
get_value_ptr(var_name)
Not possible; unable to give a reference to a data structure in another process and possibly on another machine.
initialize(filename)
Perform startup tasks for the model.
Perform all tasks that take place before entering the model’s time loop, including opening files and initializing the model state. Model inputs are read from a text-based configuration file, specified by filename.
Parameters: config_file (str, optional) – The path to the model configuration file.
Notes
Models should be refactored, if necessary, to use a configuration file. CSDMS does not impose any constraint on how configuration files are formatted, although YAML is recommended. A template of a model’s configuration file with placeholder values is used by the BMI.
input_mount_point = '/data/input'
output_mount_point = '/data/output'
exception grpc4bmi.bmi_client_docker.DeadDockerContainerException(message, exitcode, logs, *args)
Bases: ChildProcessError
Exception for when a Docker container has died.
exitcode = None
Exit code of container
logs = None
Stdout and stderr of container
grpc4bmi.bmi_client_singularity module¶
class grpc4bmi.bmi_client_singularity.BmiClientSingularity(image, input_dir=None, output_dir=None, timeout=None, extra_volumes=None)
Bases: grpc4bmi.bmi_grpc_client.BmiClient
BMI GRPC client for singularity server processes. During initialization it launches a singularity container with run-bmi-server as its command. The client picks a random port and expects the container to run the server on that port. The port is passed to the container using the BMI_PORT environment variable.
>>> from grpc4bmi.bmi_client_singularity import BmiClientSingularity
>>> image = 'docker://ewatercycle/wflow-grpc4bmi:latest'
>>> client = BmiClientSingularity(image, input_dir='wflow_rhine_sbm', output_dir='wflow_output')
>>> client.initialize('wflow_rhine_sbm/wflow_sbm.ini')
>>> client.update_until(client.get_end_time())
>>> del client
Parameters: - image – Singularity image. For Docker Hub image use docker://*.
- input_dir (str) – Directory for input files of model
- output_dir (str) – Directory for output files of model
- timeout (int) – Seconds to wait for gRPC client to connect to server
- extra_volumes (Dict[str,str]) –
Extra volumes to attach to the Singularity container.
The key is the host's path and the value the mounted volume inside the container. Contrary to the Docker client, extra volumes are always mounted read/write.
For example:
{'/data/shared/forcings/': '/data/forcings'}
INPUT_MOUNT_POINT = '/data/input'
OUTPUT_MOUNT_POINT = '/data/output'
get_value_ptr(var_name)
Not possible; unable to give a reference to a data structure in another process and possibly on another machine.
initialize(filename)
Perform startup tasks for the model.
Perform all tasks that take place before entering the model’s time loop, including opening files and initializing the model state. Model inputs are read from a text-based configuration file, specified by filename.
Parameters: config_file (str, optional) – The path to the model configuration file.
Notes
Models should be refactored, if necessary, to use a configuration file. CSDMS does not impose any constraint on how configuration files are formatted, although YAML is recommended. A template of a model’s configuration file with placeholder values is used by the BMI.
grpc4bmi.bmi_client_subproc module¶
class grpc4bmi.bmi_client_subproc.BmiClientSubProcess(module_name, path=None, timeout=None)
Bases: grpc4bmi.bmi_grpc_client.BmiClient
BMI GRPC client that owns its server process, i.e. initiates and destroys the BMI server upon its own construction and destruction. The server is a forked subprocess running the run-bmi-server command.
>>> from grpc4bmi.bmi_client_subproc import BmiClientSubProcess
>>> mymodel = BmiClientSubProcess("<PACKAGE>.<MODULE>.<CLASS>")
grpc4bmi.bmi_grpc_client module¶
class grpc4bmi.bmi_grpc_client.BmiClient(channel=None, timeout=None, stub=None)
Bases: bmipy.bmi.Bmi
Client BMI interface, implementing BMI by forwarding every function call via GRPC to the server connected to the same port. A GRPC channel can be passed to the constructor; if not, it constructs an insecure channel on a free port itself. The timeout parameter indicates the model BMI startup timeout in seconds.
>>> import grpc
>>> from grpc4bmi.bmi_grpc_client import BmiClient
>>> mymodel = BmiClient(grpc.insecure_channel("localhost:<PORT>"))
>>> print(mymodel.get_component_name())
Hello world
finalize()
Perform tear-down tasks for the model.
Perform all tasks that take place after exiting the model’s time loop. This typically includes deallocating memory, closing files and printing reports.
get_component_name()
Name of the component.
Returns: The name of the component. Return type: str
get_current_time()
Current time of the model.
Returns: The current model time. Return type: float
get_grid_edge_count(grid: int) → int
Get the number of edges in the grid.
Parameters: grid (int) – A grid identifier. Returns: The total number of grid edges. Return type: int
get_grid_edge_nodes(grid: int, edge_nodes: numpy.ndarray) → numpy.ndarray
Get the edge-node connectivity.
Parameters: - grid (int) – A grid identifier.
- edge_nodes (ndarray of int, shape (2 x nnodes,)) – A numpy array to place the edge-node connectivity. For each edge, connectivity is given as node at edge tail, followed by node at edge head.
Returns: The input numpy array that holds the edge-node connectivity.
Return type: ndarray of int
get_grid_face_count(grid: int) → int
Get the number of faces in the grid.
Parameters: grid (int) – A grid identifier. Returns: The total number of grid faces. Return type: int
get_grid_face_edges(grid: int, face_edges: numpy.ndarray) → numpy.ndarray
Get the face-edge connectivity.
Parameters: - grid (int) – A grid identifier.
- face_edges (ndarray of int) – A numpy array to place the face-edge connectivity.
Returns: The input numpy array that holds the face-edge connectivity.
Return type: ndarray of int
get_grid_face_nodes(grid: int, face_nodes: numpy.ndarray) → numpy.ndarray
Get the face-node connectivity.
Parameters: - grid (int) – A grid identifier.
- face_nodes (ndarray of int) – A numpy array to place the face-node connectivity. For each face, the nodes (listed in a counter-clockwise direction) that form the boundary of the face.
Returns: The input numpy array that holds the face-node connectivity.
Return type: ndarray of int
get_grid_node_count(grid: int) → int
Get the number of nodes in the grid.
Parameters: grid (int) – A grid identifier. Returns: The total number of grid nodes. Return type: int
get_grid_nodes_per_face(grid: int, nodes_per_face: numpy.ndarray) → numpy.ndarray
Get the number of nodes for each face.
Parameters: - grid (int) – A grid identifier.
- nodes_per_face (ndarray of int, shape (nfaces,)) – A numpy array to place the number of nodes per face.
Returns: The input numpy array that holds the number of nodes per face.
Return type: ndarray of int
get_grid_origin(grid, origin)
Get coordinates for the lower-left corner of the computational grid.
Parameters: - grid (int) – A grid identifier.
- origin (ndarray of float, shape (ndim,)) – A numpy array to hold the coordinates of the lower-left corner of the grid.
Returns: The input numpy array that holds the coordinates of the grid’s lower-left corner.
Return type: ndarray of float
get_grid_rank(grid)
Get number of dimensions of the computational grid.
Parameters: grid (int) – A grid identifier. Returns: Rank of the grid. Return type: int
get_grid_shape(grid, shape)
Get dimensions of the computational grid.
Parameters: - grid (int) – A grid identifier.
- shape (ndarray of int, shape (ndim,)) – A numpy array into which to place the shape of the grid.
Returns: The input numpy array that holds the grid’s shape.
Return type: ndarray of int
get_grid_size(grid)
Get the total number of elements in the computational grid.
Parameters: grid (int) – A grid identifier. Returns: Size of the grid. Return type: int
get_grid_spacing(grid, spacing)
Get distance between nodes of the computational grid.
Parameters: - grid (int) – A grid identifier.
- spacing (ndarray of float, shape (ndim,)) – A numpy array to hold the spacing between grid rows and columns.
Returns: The input numpy array that holds the grid’s spacing.
Return type: ndarray of float
get_grid_type(grid)
Get the grid type as a string.
Parameters: grid (int) – A grid identifier. Returns: Type of grid as a string. Return type: str
get_grid_x(grid, x)
Get coordinates of grid nodes in the x direction.
Parameters: - grid (int) – A grid identifier.
- x (ndarray of float, shape (nrows,)) – A numpy array to hold the x-coordinates of the grid node columns.
Returns: The input numpy array that holds the grid’s column x-coordinates.
Return type: ndarray of float
get_grid_y(grid, y)
Get coordinates of grid nodes in the y direction.
Parameters: - grid (int) – A grid identifier.
- y (ndarray of float, shape (ncols,)) – A numpy array to hold the y-coordinates of the grid node rows.
Returns: The input numpy array that holds the grid’s row y-coordinates.
Return type: ndarray of float
get_grid_z(grid, z)
Get coordinates of grid nodes in the z direction.
Parameters: - grid (int) – A grid identifier.
- z (ndarray of float, shape (nlayers,)) – A numpy array to hold the z-coordinates of the grid nodes layers.
Returns: The input numpy array that holds the grid’s layer z-coordinates.
Return type: ndarray of float
get_input_item_count() → int
Count of a model’s input variables.
Returns: The number of input variables. Return type: int
get_input_var_names()
List of a model’s input variables.
Input variable names must be CSDMS Standard Names, also known as long variable names.
Returns: The input variables for the model. Return type: list of str
Notes
Standard Names enable the CSDMS framework to determine whether an input variable in one model is equivalent to, or compatible with, an output variable in another model. This allows the framework to automatically connect components.
Standard Names do not have to be used within the model.
get_output_item_count() → int
Count of a model’s output variables.
Returns: The number of output variables. Return type: int
get_output_var_names()
List of a model’s output variables.
Output variable names must be CSDMS Standard Names, also known as long variable names.
Returns: The output variables for the model. Return type: list of str
get_start_time()
Start time of the model.
Model times should be of type float.
Returns: The model start time. Return type: float
get_time_step()
Current time step of the model.
The model time step should be of type float.
Returns: The time step used in model. Return type: float
get_time_units()
Time units of the model.
Returns: The model time unit; e.g., days or s. Return type: str
Notes
CSDMS uses the UDUNITS standard from Unidata.
get_value(name, dest)
Get a copy of values of the given variable.
This is a getter for the model, used to access the model’s current state. It returns a copy of a model variable, with the return type, size and rank dependent on the variable.
Parameters: - name (str) – An input or output variable name, a CSDMS Standard Name.
- dest (ndarray) – A numpy array into which to place the values.
Returns: The same numpy array that was passed as an input buffer.
Return type: ndarray
get_value_at_indices(name, dest, indices)
Get values at particular indices.
Parameters: - name (str) – An input or output variable name, a CSDMS Standard Name.
- dest (ndarray) – A numpy array into which to place the values.
- indices (array_like) – The indices into the variable array.
Returns: Value of the model variable at the given location.
Return type: array_like
get_value_ptr(name: str) → numpy.ndarray
Not possible; unable to give a reference to a data structure in another process and possibly on another machine.
get_var_grid(name)
Get grid identifier for the given variable.
Parameters: name (str) – An input or output variable name, a CSDMS Standard Name. Returns: The grid identifier. Return type: int
get_var_itemsize(name)
Get memory use for each array element in bytes.
Parameters: name (str) – An input or output variable name, a CSDMS Standard Name. Returns: Item size in bytes. Return type: int
get_var_location(name: str) → str
Get the grid element type that the given variable is defined on.
The grid topology can be composed of nodes, edges, and faces.
node – A point that has a coordinate pair or triplet: the most basic element of the topology.
edge – A line or curve bounded by two nodes.
face – A plane or surface enclosed by a set of edges. In a 2D horizontal application one may consider the word “polygon”, but in the hierarchy of elements the word “face” is most common.
Parameters: name (str) – An input or output variable name, a CSDMS Standard Name. Returns: The grid location on which the variable is defined. Must be one of “node”, “edge”, or “face”. Return type: str
Notes
CSDMS uses the ugrid conventions to define unstructured grids.
get_var_nbytes(name)
Get size, in bytes, of the given variable.
Parameters: name (str) – An input or output variable name, a CSDMS Standard Name. Returns: The size of the variable, counted in bytes. Return type: int
get_var_type(name)
Get data type of the given variable.
Parameters: name (str) – An input or output variable name, a CSDMS Standard Name. Returns: The Python variable type; e.g., str, int, float. Return type: str
get_var_units(name)
Get units of the given variable.
Standard unit names, in lower case, should be used, such as meters or seconds. Standard abbreviations, like m for meters, are also supported. For variables with compound units, each unit name is separated by a single space, with exponents other than 1 placed immediately after the name, as in m s-1 for velocity, W m-2 for an energy flux, or km2 for an area.
Parameters: name (str) – An input or output variable name, a CSDMS Standard Name. Returns: The variable units. Return type: str
Notes
CSDMS uses the UDUNITS standard from Unidata.
initialize(filename)
Perform startup tasks for the model.
Perform all tasks that take place before entering the model’s time loop, including opening files and initializing the model state. Model inputs are read from a text-based configuration file, specified by filename.
Parameters: config_file (str, optional) – The path to the model configuration file.
Notes
Models should be refactored, if necessary, to use a configuration file. CSDMS does not impose any constraint on how configuration files are formatted, although YAML is recommended. A template of a model’s configuration file with placeholder values is used by the BMI.
set_value(name, values)
Specify a new value for a model variable.
This is the setter for the model, used to change the model’s current state. It accepts, through src, a new value for a model variable, with the type, size and rank of src dependent on the variable.
Parameters: - var_name (str) – An input or output variable name, a CSDMS Standard Name.
- src (array_like) – The new value for the specified variable.
set_value_at_indices(name, inds, src)
Specify a new value for a model variable at particular indices.
Parameters: - var_name (str) – An input or output variable name, a CSDMS Standard Name.
- indices (array_like) – The indices into the variable array.
- src (array_like) – The new value for the specified variable.
update()
Advance model state by one time step.
Perform all tasks that take place within one pass through the model’s time loop. This typically includes incrementing all of the model’s state variables. If the model’s state variables don’t change in time, then they can be computed by the initialize() method and this method can return with no action.
grpc4bmi.bmi_grpc_client.handle_error(exc)
Parses DebugInfo (https://github.com/googleapis/googleapis/blob/07244bb797ddd6e0c1c15b02b4467a9a5729299f/google/rpc/error_details.proto#L46-L52) from the trailing metadata of a grpc.RpcError
Parameters: exc (grpc.RpcError) – Exception to handle
Raises: original exception or RemoteException
grpc4bmi.bmi_grpc_legacy_server module¶
class grpc4bmi.bmi_grpc_legacy_server.BmiLegacyServer02(model, debug=False)
Bases: grpc4bmi.bmi_pb2_grpc.BmiServiceServicer
BMI Server class, wrapping an existing python implementation and exposing it via GRPC across the memory space (to listening client processes). The class takes a package, module and class name and instantiates the BMI implementation by assuming a default constructor with no arguments.
For models implementing the bmi interface defined at https://pypi.org/project/basic-modeling-interface/0.2/
Parameters: - model – Bmi model object which must be wrapped by grpc
- debug – If true then returns stacktrace in an error response. The stacktrace is returned in the trailing metadata as a DebugInfo (https://github.com/googleapis/googleapis/blob/07244bb797ddd6e0c1c15b02b4467a9a5729299f/google/rpc/error_details.proto#L46-L52) message.
grpc4bmi.bmi_grpc_server module¶
class grpc4bmi.bmi_grpc_server.BmiServer(model, debug=False)
Bases: grpc4bmi.bmi_pb2_grpc.BmiServiceServicer
BMI Server class, wrapping an existing python implementation and exposing it via GRPC across the memory space (to listening client processes). The class takes a package, module and class name and instantiates the BMI implementation by assuming a default constructor with no arguments.
Parameters: - model – Bmi model object which must be wrapped by grpc
- debug – If true then returns stacktrace in an error response. The stacktrace is returned in the trailing metadata as a DebugInfo (https://github.com/googleapis/googleapis/blob/07244bb797ddd6e0c1c15b02b4467a9a5729299f/google/rpc/error_details.proto#L46-L52) message.
grpc4bmi.bmi_r_model module¶
grpc4bmi.reserve module¶
Helpers to reserve numpy arrays for use in some of the Bmi methods as output arguments
grpc4bmi.reserve.reserve_grid_nodes(model: bmipy.bmi.Bmi, grid_id: int, dim_index: int) → numpy.ndarray
Reserve dest for bmipy.Bmi.get_grid_x(), bmipy.Bmi.get_grid_y() and bmipy.Bmi.get_grid_z().
The dim_index goes x, y, z while model.get_grid_shape goes z, y, x or y, x, so the index is inverted.
grpc4bmi.reserve.reserve_grid_padding(model: bmipy.bmi.Bmi, grid_id: int) → numpy.ndarray
Reserve dest for bmipy.Bmi.get_grid_spacing() and bmipy.Bmi.get_grid_origin().
grpc4bmi.reserve.reserve_grid_shape(model: bmipy.bmi.Bmi, grid_id: int) → numpy.ndarray
Reserve shape for bmipy.Bmi.get_grid_shape()