Module H5P

HDF5 property list interface.

Functional API

h5py.h5p.create(PropClassID cls) → PropID

Create a new property list as an instance of a class; the predefined classes are listed under Module constants below.
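
For example, a minimal sketch that builds a file-access property list from its predefined class:

    from h5py import h5p

    # Build a file-access property list instance from the predefined class
    fapl = h5p.create(h5p.FILE_ACCESS)
    print(isinstance(fapl, h5p.PropFAID))   # True: instances come back fully typed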

Base classes

class h5py.h5p.PropID

Bases: h5py._objects.ObjectID

Base class for all property lists and classes

equal(PropID plist) → BOOL

Compare this property list (or class) to another for equality.

class h5py.h5p.PropClassID

Bases: h5py.h5p.PropID

An HDF5 property list class.

  • Hashable: Yes, by identifier
  • Equality: Logical H5P comparison
class h5py.h5p.PropInstanceID

Bases: h5py.h5p.PropID

Base class for property list instance objects. Provides methods which are common across all HDF5 property list classes.

  • Hashable: No
  • Equality: Logical H5P comparison
copy() → PropList newid

Create a new copy of an existing property list object.

get_class() → PropClassID

Determine the class of a property list object.
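
For illustration, a short sketch that exercises copy(), get_class() and the inherited equal():

    from h5py import h5p

    # Duplicate a dataset-creation list and compare it to the original
    dcpl = h5p.create(h5p.DATASET_CREATE)
    duplicate = dcpl.copy()

    print(duplicate.get_class() == h5p.DATASET_CREATE)  # True: classes compare logically
    print(dcpl.equal(duplicate))                         # True: identical settings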

class h5py.h5p.PropCreateID

Bases: h5py.h5p.PropInstanceID

Generic object creation property list.

class h5py.h5p.PropCopyID

Bases: h5py.h5p.PropInstanceID

Generic object copy property list

get_copy_object() → UINT flags

Get copy process flags. Legal flags are h5o.COPY*.

set_copy_object(UINT flags)

Set flags for object copying process. Legal flags are from the h5o.COPY* family:

h5o.COPY_SHALLOW_HIERARCHY_FLAG
Copy only immediate members of a group.
h5o.COPY_EXPAND_SOFT_LINK_FLAG
Expand soft links into new objects.
h5o.COPY_EXPAND_EXT_LINK_FLAG
Expand external links into new objects.
h5o.COPY_EXPAND_REFERENCE_FLAG
Copy objects that are pointed to by references.
h5o.COPY_WITHOUT_ATTR_FLAG
Copy object without copying attributes.
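
A sketch combining two of these flags on an OBJECT_COPY list:

    from h5py import h5p, h5o

    # Skip attributes and expand soft links during object copy operations
    ocpypl = h5p.create(h5p.OBJECT_COPY)
    ocpypl.set_copy_object(h5o.COPY_WITHOUT_ATTR_FLAG | h5o.COPY_EXPAND_SOFT_LINK_FLAG)
    print(ocpypl.get_copy_object())   # the combined flag value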

File creation

class h5py.h5p.PropFCID

Bases: h5py.h5p.PropCreateID

File creation property list.

get_link_creation_order() → UINT flags

Get tracking and indexing of creation order for links added to this group.

get_sizes() → TUPLE sizes

Determine addressing offsets and lengths for objects in an HDF5 file, in bytes. Return value is a 2-tuple with values:

  1. UINT Address offsets
  2. UINT Lengths
get_userblock() → LONG size

Determine the user block size, in bytes.

get_version() → TUPLE version_info

Determine version information of various file attributes. Elements are:

  1. UINT Super block version number
  2. UINT Freelist version number
  3. UINT Symbol table version number
  4. UINT Shared object header version number

set_link_creation_order(UINT flags)

Set tracking and indexing of creation order for links added to this group.

flags – h5p.CRT_ORDER_TRACKED, h5p.CRT_ORDER_INDEXED

set_sizes(UINT addr, UINT size)

Set the addressing offsets and lengths for objects in an HDF5 file, in bytes.

set_userblock(INT/LONG size)

Set the file user block size, in bytes. Must be a power of 2, and at least 512.
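
A sketch of typical file-creation settings, using only the methods above:

    from h5py import h5p

    # Reserve a 512-byte user block (the minimum legal size) and inspect defaults
    fcpl = h5p.create(h5p.FILE_CREATE)
    fcpl.set_userblock(512)

    print(fcpl.get_userblock())   # 512
    print(fcpl.get_sizes())       # (address offset size, length size), in bytes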

File access

class h5py.h5p.PropFAID

Bases: h5py.h5p.PropInstanceID

File access property list

get_cache() → TUPLE cache info

Get the metadata and raw data chunk cache settings. See the HDF5 docs for element definitions. Return is a 4-tuple with entries:

  1. INT mdc: Number of metadata objects
  2. INT rdcc: Number of raw data chunks
  3. UINT rdcc_nbytes: Size of raw data cache
  4. DOUBLE rdcc_w0: Preemption policy for data cache.
get_driver() → INT driver code

Return an integer identifier for the driver used by this list. Although HDF5 implements these as full-fledged objects, they are treated as integers by Python. Built-in driver identifiers are listed in module h5fd.

get_fapl_core() → TUPLE core_settings

Determine settings for the h5fd.CORE (memory-resident) file driver. Tuple elements are:

  1. UINT “increment”: Chunk size for new memory requests
  2. BOOL “backing_store”: If True, write the memory contents to disk when the file is closed.
get_fapl_family() → TUPLE info

Determine family driver settings. Tuple values are:

  1. UINT memb_size
  2. PropFAID memb_fapl or None
get_fclose_degree() → INT close_degree

Get the file-close degree, which determines library behavior when a file is closed while objects are still open. Legal values are the same as for set_fclose_degree().
get_libver_bounds() → (INT low, INT high)

Get the compatibility level for the file format. Returned values are the legal values accepted by set_libver_bounds().

get_mdc_config() → CacheConfig

Returns an object that stores all the information about the meta-data cache configuration.
get_sieve_buf_size() → UINT size

Get the current maximum size of the data sieve buffer (in bytes).

set_cache(INT mdc, INT rdcc, UINT rdcc_nbytes, DOUBLE rdcc_w0)

Set the metadata (mdc) and raw data chunk (rdcc) cache properties. See the HDF5 docs for a full explanation.
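
For illustration, a sketch that reads the current settings and enlarges the raw data chunk cache; arguments are positional, in the order given by the signature above:

    from h5py import h5p

    # Grow the chunk cache to 16 MiB while keeping the other settings unchanged
    fapl = h5p.create(h5p.FILE_ACCESS)
    mdc, rdcc, rdcc_nbytes, rdcc_w0 = fapl.get_cache()
    fapl.set_cache(mdc, rdcc, 16 * 1024 * 1024, rdcc_w0)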

set_fapl_core(UINT increment=64k, BOOL backing_store=True)

Use the h5fd.CORE (memory-resident) file driver.

increment
Chunk size for new memory requests (default 64k, as in the signature above)
backing_store
If True (default), memory contents are associated with an on-disk file, which is updated when the file is closed. Set to False for a purely in-memory file.
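
A sketch selecting the CORE driver without a backing store; the arguments are passed positionally in the documented order:

    from h5py import h5p, h5fd

    # Purely in-memory file access: 64k increments, nothing written to disk
    fapl = h5p.create(h5p.FILE_ACCESS)
    fapl.set_fapl_core(64 * 1024, False)

    print(fapl.get_driver() == h5fd.CORE)   # True
    print(fapl.get_fapl_core())             # (increment, backing_store)
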
set_fapl_family(UINT memb_size=2**31-1, PropFAID memb_fapl=None)

Set up the family driver.

memb_size
Member file size
memb_fapl
File access property list for each member access
set_fapl_log(STRING logfile, UINT flags, UINT buf_size)

Enable the use of the logging driver. See the HDF5 documentation for details. Flag constants are stored in module h5fd.

set_fapl_sec2()

Select the “section-2” driver (h5fd.SEC2).

set_fapl_stdio()

Select the “stdio” driver (h5fd.STDIO)

set_fclose_degree(INT close_degree)

Set the file-close degree, which determines library behavior when a file is closed while objects are still open. Legal values:

set_libver_bounds(INT low, INT high)

Set the compatibility level for the file format. Legal values are the h5f.LIBVER* constants.
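
A sketch requesting the newest file format features, assuming the LIBVER_LATEST constant defined in module h5f:

    from h5py import h5p, h5f

    # Allow the library to use the latest file format for all objects
    fapl = h5p.create(h5p.FILE_ACCESS)
    fapl.set_libver_bounds(h5f.LIBVER_LATEST, h5f.LIBVER_LATEST)
    print(fapl.get_libver_bounds())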

set_mdc_config(CacheConfig) → None

Set the meta-data cache configuration from a CacheConfig object.
set_sieve_buf_size(UINT size)

Set the maximum size of the data sieve buffer (in bytes). This buffer can improve I/O performance for hyperslab I/O, by combining reads and writes into blocks of the given size. The default is 64k.

Dataset creation

class h5py.h5p.PropDCID

Bases: h5py.h5p.PropOCID

Dataset creation property list.

all_filters_avail() → BOOL

Determine if all the filters in the pipeline are available to the library.

fill_value_defined() → INT fill_status

Determine the status of the dataset fill value. Return values are:

get_alloc_time() → INT alloc_time

Get the storage space allocation time. One of h5d.ALLOC_TIME*.

get_chunk() → TUPLE chunk_dimensions

Obtain the dataset chunk size, as a tuple.

get_fill_time() → INT

Determine when fill values are written to the dataset. Legal values (defined in module h5d) are:

get_fill_value(NDARRAY value)

Read the dataset fill value into a NumPy array. It will be converted to match the array dtype. If the array has nonzero rank, only the first element will contain the value.

get_filter(UINT filter_idx) → TUPLE filter_info

Get information about a filter, identified by its index. Tuple elements are:

  1. INT filter code (h5z.FILTER*)
  2. UINT flags (h5z.FLAG*)
  3. TUPLE of UINT values; filter aux data (16 values max)
  4. STRING name of filter (256 chars max)
get_filter_by_id(INT filter_code) → TUPLE filter_info or None

Get information about a filter, identified by its code (one of h5z.FILTER*). If the filter doesn’t exist, returns None. Tuple elements are:

  1. UINT flags (h5z.FLAG*)
  2. TUPLE of UINT values; filter aux data (16 values max)
  3. STRING name of filter (256 chars max)
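
A sketch that builds a small pipeline and then inspects it with the query methods above (filter codes from module h5z):

    from h5py import h5p, h5z

    # Chunked layout with gzip level 4, then walk the pipeline
    dcpl = h5p.create(h5p.DATASET_CREATE)
    dcpl.set_chunk((64, 64))
    dcpl.set_deflate(4)

    for idx in range(dcpl.get_nfilters()):
        print(dcpl.get_filter(idx))                  # (code, flags, values, name)

    print(dcpl.get_filter_by_id(h5z.FILTER_DEFLATE))
    print(dcpl.get_filter_by_id(h5z.FILTER_SZIP))    # None: not in the pipeline
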
get_layout() → INT layout_code

Determine the storage strategy of a dataset; legal values are:

get_nfilters() → INT

Determine the number of filters in the pipeline.

remove_filter(INT filter_class)

Remove a filter from the pipeline. The class code is one of h5z.FILTER*.

set_alloc_time(INT alloc_time)

Set the storage space allocation time. One of h5d.ALLOC_TIME*.

set_chunk(TUPLE chunksize)

Set the dataset chunk size. It’s up to you to provide values which are compatible with your dataset.

set_deflate(UINT level=5)

Enable deflate (gzip) compression, at the given level. Valid levels are 0-9, default is 5.

set_fill_time(INT fill_time)

Define when fill values are written to the dataset. Legal values (defined in module h5d) are:

set_fill_value(NDARRAY value)

Set the dataset fill value. The object provided should be a 0-dimensional NumPy array; otherwise, the value will be read from the first element.
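
A sketch of setting and reading back a fill value with 0-dimensional NumPy arrays:

    import numpy as np
    from h5py import h5p

    # Store an integer fill value, then read it back as a double
    dcpl = h5p.create(h5p.DATASET_CREATE)
    dcpl.set_fill_value(np.array(-999, dtype='i4'))

    out = np.zeros((), dtype='f8')   # the value is converted to this dtype
    dcpl.get_fill_value(out)
    print(out[()])                   # -999.0
    print(dcpl.fill_value_defined())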

set_filter(INT filter_code, UINT flags=0, TUPLE values=None)

Set a filter in the pipeline. Params are:

filter_code
One of the h5z.FILTER* filter codes
flags
Bit flags (h5z.FLAG*) setting filter properties
values
TUPLE of UINTs giving auxiliary data for the filter
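
A sketch using the generic interface to add deflate at level 6; the h5z.FLAG_OPTIONAL constant used here is an assumption:

    from h5py import h5p, h5z

    # Roughly equivalent to set_deflate(6): one auxiliary value, the compression level
    dcpl = h5p.create(h5p.DATASET_CREATE)
    dcpl.set_filter(h5z.FILTER_DEFLATE, h5z.FLAG_OPTIONAL, (6,))
    print(dcpl.get_filter_by_id(h5z.FILTER_DEFLATE))
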
set_fletcher32()

Enable Fletcher32 error detection (checksumming) on this list.

set_layout(INT layout_code)

Set dataset storage strategy; legal values are:

set_scaleoffset(H5Z_SO_scale_type_t scale_type, INT scale_factor)

Enable scale/offset (usually lossy) compression; lossless (e.g. gzip) compression and other filters may be applied on top of this.

Note that error detection (i.e. fletcher32) cannot precede this in the filter chain, or else all reads on lossily-compressed data will fail.

set_shuffle()

Enable the use of the shuffle filter. Use this immediately before the deflate filter to increase the compression ratio.
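
A sketch of the recommended ordering, shuffle before deflate:

    from h5py import h5p

    # Shuffle is added first so it runs before compression on write
    dcpl = h5p.create(h5p.DATASET_CREATE)
    dcpl.set_chunk((1024,))
    dcpl.set_shuffle()
    dcpl.set_deflate(6)
    print(dcpl.get_nfilters())   # 2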

set_szip(UINT options, UINT pixels_per_block)

Enable SZIP compression. See the HDF5 docs for argument meanings, and general restrictions on use of the SZIP format.

Group creation

class h5py.h5p.PropGCID

Bases: h5py.h5p.PropOCID

Group creation property list

get_link_creation_order() → UINT flags

Get tracking and indexing of creation order for links added to this group.

set_link_creation_order(UINT flags)

Set tracking and indexing of creation order for links added to this group.

flags – h5p.CRT_ORDER_TRACKED, h5p.CRT_ORDER_INDEXED
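
A sketch, assuming the get_link_creation_order/set_link_creation_order methods named above:

    from h5py import h5p

    # Track and index link creation order for groups created with this list
    gcpl = h5p.create(h5p.GROUP_CREATE)
    gcpl.set_link_creation_order(h5p.CRT_ORDER_TRACKED | h5p.CRT_ORDER_INDEXED)
    print(gcpl.get_link_creation_order())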

Module constants

Predefined classes

h5py.h5p.DEFAULT
h5py.h5p.FILE_CREATE
h5py.h5p.FILE_ACCESS
h5py.h5p.DATASET_CREATE
h5py.h5p.DATASET_XFER
h5py.h5p.OBJECT_COPY
h5py.h5p.GROUP_CREATE
