srun --pty -t 0:30:00 --partition=<machine>-debug bash

sinfo
greytail{thornton}% sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
swan01-debug* up 30:00 1 idle swan01.cpu.stats.ox.ac.uk
swan02-debug up 30:00 1 idle swan02.cpu.stats.ox.ac.uk
swan03-debug up 30:00 1 mix swan03.cpu.stats.ox.ac.uk
swan11-debug up 30:00 1 idle swan11.cpu.stats.ox.ac.uk
swan12-debug up 30:00 1 idle swan12.cpu.stats.ox.ac.uk
grey01-debug up 30:00 1 idle grey01.cpu.stats.ox.ac.uk
greyheron-debug up 30:00 1 idle greyheron.stats.ox.ac.uk
greyplover-debug up 30:00 1 idle greyplover.stats.ox.ac.uk
greywagtail-debug up 30:00 1 idle greywagtail.stats.ox.ac.uk
greypartridge-debug up 30:00 1 idle greypartridge.stats.ox.ac.uk
greyostrich-debug up 30:00 1 mix greyostrich.stats.ox.ac.uk
grey-standard up 7-00:00:00 4 idle greyheron.stats.ox.ac.uk,greypartridge.stats.ox.ac.uk,greyplover.stats.ox.ac.uk,greywagtail.stats.ox.ac.uk
grey-fast up 7-00:00:00 1 idle grey01.cpu.stats.ox.ac.uk
grey-gpu up 7-00:00:00 1 mix greyostrich.stats.ox.ac.uk
swan-1hr up 1:00:00 1 mix swan03.cpu.stats.ox.ac.uk
swan-1hr up 1:00:00 2 idle swan01.cpu.stats.ox.ac.uk,swan02.cpu.stats.ox.ac.uk
swan-6hrs up 6:00:00 1 mix swan03.cpu.stats.ox.ac.uk
swan-6hrs up 6:00:00 1 idle swan02.cpu.stats.ox.ac.uk
swan-2day up 2-00:00:00 1 mix swan03.cpu.stats.ox.ac.uk
swan-large up 7-00:00:00 2 idle swan11.cpu.stats.ox.ac.uk,swan12.cpu.stats.ox.ac.uk
stats-7day up 7-00:00:00 1 idle emu.stats.ox.ac.uk
squeue
greytail{thornton}% squeue
JOBID PARTITION NAME ST TIME NODES NODELIST(REASON)
845457 swan03-de bash R 14:54 1 swan03.cpu.stats.ox.ac.uk
845455 swan03-de bash R 17:22 1 swan03.cpu.stats.ox.ac.uk
845215 swan-2day SCI R 6:33:10 1 swan03.cpu.stats.ox.ac.uk
845400 grey-gpu job01 R 3:06:28 1 greyostrich.stats.ox.ac.uk
845397 grey-gpu job01 R 3:10:17 1 greyostrich.stats.ox.ac.uk
841508 grey-gpu eff_n_12 R 1-07:35:22 1 greyostrich.stats.ox.ac.uk
838246 grey-gpu eff_n R 2-18:29:05 1 greyostrich.stats.ox.ac.uk
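The TIME column above mixes formats (MM:SS, H:MM:SS, and D-HH:MM:SS for multi-day jobs), which makes sorting or filtering jobs by runtime awkward. A small parser helps; this is a sketch and the helper name is my own:

```python
def slurm_time_to_seconds(t):
    """Convert a Slurm elapsed-time string such as '14:54',
    '6:33:10' or '1-07:35:22' into total seconds."""
    days = 0
    if "-" in t:
        d, t = t.split("-")
        days = int(d)
    parts = [int(p) for p in t.split(":")]
    while len(parts) < 3:          # pad to [hours, minutes, seconds]
        parts.insert(0, 0)
    h, m, s = parts
    return ((days * 24 + h) * 60 + m) * 60 + s

print(slurm_time_to_seconds("1-07:35:22"))  # → 113722
```

For example, job 838246 above has been running for `slurm_time_to_seconds("2-18:29:05")` seconds.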
launch.sh on head node
sbatch launch.sh
#!/bin/bash
#SBATCH -A oxwasp # Account to be used, e.g. academic, acadrel, aims, bigbayes, opig, oxcsml, oxwasp, rstudent, statgen, statml, visitors
#SBATCH -J job01 # Job name, can be useful but optional
#SBATCH --time=7-00:00:00 # Walltime - maximum run time of 7 days
#SBATCH [email protected] # set email address to use, change to your own email address instead of "me"
#SBATCH --mail-type=ALL # Caution: fine for debug, but not if handling hundreds of jobs!
#SBATCH --partition=grey-gpu # Select the grey GPU partition
#SBATCH --nodelist=greyostrich.stats.ox.ac.uk
#SBATCH --output="/tmp/slurm-JT-output"
#SBATCH --mem 20g
#SBATCH --cpus-per-task 5
#SBATCH --gres=gpu:1
cd /data/greyostrich/oxwasp/oxwasp18/thornton
source ./miniconda3/bin/activate bridge
pip install tornado
python -m ipykernel install --user --name=bridge
python -m jupyter notebook --ip greyostrich.stats.ox.ac.uk --no-browser --port 8888
paramiko library for ssh utils

greytail via paramiko

import os
import paramiko
client = paramiko.SSHClient()
client.load_system_host_keys()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(hostname='greytail')
command = 'sinfo'
stdin, stdout, stderr = client.exec_command(command)
lines = stdout.readlines()
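The readlines() result is just the raw sinfo text, so picking out (say) partitions with idle nodes takes a little string handling. A sketch, assuming the default sinfo column order shown earlier:

```python
def idle_partitions(sinfo_lines):
    """Return names of partitions that have at least one idle node,
    given the output lines of a plain `sinfo` call."""
    idle = []
    for line in sinfo_lines[1:]:           # skip the header row
        fields = line.split()
        if len(fields) >= 5 and fields[4] == "idle":
            idle.append(fields[0].rstrip("*"))  # drop default-partition marker
    return idle

sample = [
    "PARTITION AVAIL TIMELIMIT NODES STATE NODELIST",
    "swan01-debug* up 30:00 1 idle swan01.cpu.stats.ox.ac.uk",
    "grey-gpu up 7-00:00:00 1 mix greyostrich.stats.ox.ac.uk",
]
print(idle_partitions(sample))  # → ['swan01-debug']
```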
preamble = """#!/bin/bash
#SBATCH -A oxwasp
#SBATCH --time=20:00:00
#SBATCH [email protected]
#SBATCH --mail-type=ALL
#SBATCH --partition=grey-standard
#SBATCH --nodelist="greyheron.stats.ox.ac.uk"
#SBATCH --output="/tmp/slurm-JT-output"
#SBATCH --mem "15G"
#SBATCH --cpus-per-task 10
"""
command = preamble + "\n" + """
cd /data/localhost/oxwasp/oxwasp18/thornton
touch test_new_file2.txt
"""
slurm_wd = '/data/thornton'
slurm_file = 'test_batch.sh'
ftp = client.open_sftp()
ftp.chdir(slurm_wd)
file = ftp.file(slurm_file, "w", -1)
file.write(command)
file.flush()
ftp.close()
sbatch_cmd = 'sbatch {0}'.format(os.path.join(slurm_wd, slurm_file))
stdin, stdout, stderr = client.exec_command(sbatch_cmd)
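sbatch confirms submission on stdout with a line like `Submitted batch job 845457`; if you want the job id back in Python (e.g. to poll or cancel the job later), a regex will do. A sketch, assuming the default sbatch output format (the helper name is my own):

```python
import re

def parse_job_id(sbatch_stdout):
    """Pull the numeric job id out of sbatch's confirmation line,
    or return None if no id is present."""
    match = re.search(r"Submitted batch job (\d+)", sbatch_stdout)
    return int(match.group(1)) if match else None

print(parse_job_id("Submitted batch job 845457"))  # → 845457
```

In the paramiko flow above this would be applied as `parse_job_id(stdout.read().decode())`.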
virtualenv, conda, and docker – see here for a discussion. This post will focus on conda and give a few practical commands to get up and running.
curl https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -o "$conda_dir/miniconda.sh"
sh -x "$conda_dir/miniconda.sh" -b -p "./miniconda3"
Note: The default conda set-up requires editing the .bashrc file and setting environment variables to point to the conda executable. This is a pain when dealing with multiple servers; fortunately, there are ways around this, and the commands given here do not rely on editing the .bashrc.
./miniconda3/bin/conda create -n conda_venv python=3.8
source ./miniconda3/bin/activate conda_venv
conda deactivate
conda install -c anaconda numpy
pip install numpy

Export installed dependencies to file
conda env export > env.yaml
conda env create -f env.yaml
pip install jupyter
python -m ipykernel install --user --name=conda_venv
conda install pywin32

Instructions are based on this guide, with some of the kinks worked out for common problems.
I am using conda to manage dependencies and using the MS Visual Studio 2019 IDE for C++.
Code here
conda create -n venv
conda activate venv
conda install pip
pip install pybind11
Project > superfastcode Properties

| Tab | Property | Value |
|---|---|---|
| General | General > Target Name | Specify the name of the module as you want to refer to it from Python in from…import statements. You use this same name in the C++ when defining the module for Python. If you want to use the name of the project as the module name, leave the default value of $(ProjectName). |
| | General (or Advanced) > Target Extension | .pyd |
| | Project Defaults > Configuration Type | Dynamic Library (.dll) |
| C/C++ > General | Additional Include Directories | Add the Python include folder as appropriate for your installation, for example C:\Users\james\Miniconda3\envs\venv\include. |
| C/C++ > Code Generation | Runtime Library | Multi-threaded DLL (/MD) (see Warning below) |
| Linker > General | Additional Library Directories | Add the Python libs folder containing .lib files as appropriate for your installation, for example, C:\Users\james\Miniconda3\envs\venv\libs. (Be sure to point to the libs folder that contains .lib files, and not the Lib folder that contains .py files.) |
#include <cmath>
#include <pybind11/pybind11.h>
const double e = 2.7182818284590452353602874713527;
double sinh_impl(double x) {
return (1 - pow(e, (-2 * x))) / (2 * pow(e, -x));
}
double cosh_impl(double x) {
return (1 + pow(e, (-2 * x))) / (2 * pow(e, -x));
}
double tanh_impl(double x) {
return sinh_impl(x) / cosh_impl(x);
}
namespace py = pybind11;
PYBIND11_MODULE(superfastcode, m) {
m.def("fast_tanh", &tanh_impl, R"pbdoc(
Compute a hyperbolic tangent of a single argument expressed in radians.
)pbdoc");
#ifdef VERSION_INFO
m.attr("__version__") = VERSION_INFO;
#else
m.attr("__version__") = "dev";
#endif
}
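The module computes tanh as the ratio of exponential forms of sinh and cosh. As a sanity check before building, the same arithmetic can be mirrored in Python and compared against math.tanh; this is a pure-Python mirror for illustration only, not part of the module:

```python
import math

E = 2.7182818284590452353602874713527

def sinh_impl(x):
    # mirrors the C++ exponential form of sinh
    return (1 - E ** (-2 * x)) / (2 * E ** -x)

def cosh_impl(x):
    # mirrors the C++ exponential form of cosh
    return (1 + E ** (-2 * x)) / (2 * E ** -x)

def tanh_impl(x):
    return sinh_impl(x) / cosh_impl(x)

# the exponential identities agree with the library tanh
for x in [-2.0, -0.5, 0.0, 0.5, 2.0]:
    assert math.isclose(tanh_impl(x), math.tanh(x), rel_tol=1e-12)
```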
Ctrl+Shift+B or Build > Build Solution, ensuring the correct configuration.

setup.py

import os, sys
from distutils.core import setup, Extension
from distutils import sysconfig
cpp_args = ['-std=c++11', '-stdlib=libc++', '-mmacosx-version-min=10.7']
sfc_module = Extension(
'superfastcode', sources=['module.cpp'],
include_dirs=['pybind11/include'],
language='c++',
extra_compile_args=cpp_args,
)
setup(
name='superfastcode',
version='1.0',
description='Python package with superfastcode C++ extension (PyBind11)',
ext_modules=[sfc_module],
)
pip install .

python
>>> from superfastcode import fast_tanh
NumPy arrays may be accessed through the buffer protocol. See more examples in the pybind11 docs here.
#include <cmath>
#include <pybind11/pybind11.h>
#include <pybind11/numpy.h>
const double e = 2.7182818284590452353602874713527;
double sinh_impl(double x) {
return (1 - pow(e, (-2 * x))) / (2 * pow(e, -x));
}
double cosh_impl(double x) {
return (1 + pow(e, (-2 * x))) / (2 * pow(e, -x));
}
double tanh_impl(double x) {
return sinh_impl(x) / cosh_impl(x);
}
namespace py = pybind11;
py::array_t<double> add_arrays(py::array_t<double> input1, py::array_t<double> input2) {
py::buffer_info buf1 = input1.request(), buf2 = input2.request();
if (buf1.ndim != 1 || buf2.ndim != 1)
throw std::runtime_error("Number of dimensions must be one");
if (buf1.size != buf2.size)
throw std::runtime_error("Input shapes must match");
/* No pointer is passed, so NumPy will allocate the buffer */
auto result = py::array_t<double>(buf1.size);
py::buffer_info buf3 = result.request();
double* ptr1 = (double*)buf1.ptr,
* ptr2 = (double*)buf2.ptr,
* ptr3 = (double*)buf3.ptr;
for (size_t idx = 0; idx < buf1.shape[0]; idx++)
ptr3[idx] = ptr1[idx] + ptr2[idx];
return result;
}
PYBIND11_MODULE(superfastcode, m) {
m.def("fast_tanh", &tanh_impl, R"pbdoc(
Compute a hyperbolic tangent of a single argument expressed in radians.
)pbdoc");
m.def("add_arrays", &add_arrays, "Add two NumPy arrays");
#ifdef VERSION_INFO
m.attr("__version__") = VERSION_INFO;
#else
m.attr("__version__") = "dev";
#endif
}
python
>>> import numpy as np
>>> from superfastcode import add_arrays
>>> a = np.array([1.,2.,3.])
>>> b = a.copy()
>>> add_arrays(a,b)
array([2., 4., 6.])