modin.pandas.read_xml

modin.pandas.read_xml(path_or_buffer: FilePath | ReadBuffer[bytes] | ReadBuffer[str], *, xpath: str = './*', namespaces: dict[str, str] | None = None, elems_only: bool = False, attrs_only: bool = False, names: Sequence[str] | None = None, dtype: DtypeArg | None = None, converters: ConvertersArg | None = None, parse_dates: ParseDatesArg | None = None, encoding: str | None = 'utf-8', parser: XMLParsers = 'lxml', stylesheet: FilePath | ReadBuffer[bytes] | ReadBuffer[str] | None = None, iterparse: dict[str, list[str]] | None = None, compression: CompressionOptions = 'infer', storage_options: StorageOptions = None, dtype_backend: DtypeBackend | NoDefault = _NoDefault.no_default) → pd.DataFrame

Read XML document into a DataFrame object.

Parameters:
  • path_or_buffer (str, path object, or file-like object) – String, path object (implementing os.PathLike[str]), or file-like object implementing a read() function. The string can be a path or a URL; valid URL schemes include http, ftp, s3, and file.

  • xpath (str, optional, default './*') – The XPath expression to parse the required set of nodes for migration to a DataFrame. xpath should return a collection of elements and not a single element. Note: the etree parser supports limited XPath expressions. For more complex XPath, use lxml, which requires installation.

  • namespaces (dict, optional) – The namespaces defined in the XML document as a dict, with the key being the namespace prefix and the value the URI. There is no need to include all namespaces in the XML document, only the ones used in the xpath expression. Note: if the XML document uses a default namespace denoted as xmlns='<URI>' without a prefix, you must assign any temporary namespace prefix, such as 'doc', to the URI in order to parse underlying nodes and/or attributes.

  • elems_only (bool, optional, default False) – Parse only the child elements at the specified xpath. By default, all child elements and non-empty text nodes are returned.

  • attrs_only (bool, optional, default False) – Parse only the attributes at the specified xpath. By default, all attributes are returned.

  • names (list-like, optional) – Column names for DataFrame of parsed XML data. Use this parameter to rename original element names and distinguish same named elements and attributes.

  • dtype (Type name or dict of column -> type, optional) – Data type for data or columns. E.g. {‘a’: np.float64, ‘b’: np.int32, ‘c’: ‘Int64’} Use str or object together with suitable na_values settings to preserve and not interpret dtype. If converters are specified, they will be applied INSTEAD of dtype conversion.

  • converters (dict, optional) – Dict of functions for converting values in certain columns. Keys can either be integers or column labels.

  • parse_dates (bool or list of int or names or list of lists or dict, default False) – Identifiers to parse index or columns to datetime. The behavior is as follows:
    - boolean: if True -> try parsing the index.
    - list of int or names: e.g. [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate date column.
    - list of lists: e.g. [[1, 3]] -> combine columns 1 and 3 and parse as a single date column.
    - dict: e.g. {'foo': [1, 3]} -> parse columns 1, 3 as date and call the result 'foo'.

  • encoding (str, optional, default ‘utf-8’) – Encoding of XML document.

  • parser ({'lxml', 'etree'}, default 'lxml') – Parser module to use for retrieval of data. Only 'lxml' and 'etree' are supported. With 'lxml', more complex XPath searches and the ability to use XSLT stylesheets are supported.

  • stylesheet (str, path object or file-like object) – A URL, file-like object, or a string path containing an XSLT script. This stylesheet should flatten complex, deeply nested XML documents for easier parsing. To use this feature you must have the lxml module installed and specify 'lxml' as the parser. The xpath must reference nodes of the transformed XML document generated after the XSLT transformation, not the original XML document. Only XSLT 1.0 scripts, and not later versions, are currently supported.

  • iterparse (dict, optional) – The nodes or attributes to retrieve when iterparsing the XML document, as a dict with the key being the name of the repeating element and the value being a list of element or attribute names that are descendants of the repeated element. Note: if this option is used, it replaces xpath parsing and, unlike xpath, descendants do not need to relate to each other but can exist anywhere in the document under the repeating element. This memory-efficient method should be used for very large XML files (500MB, 1GB, or 5GB+). For example, {"row_element": ["child_elem", "attr", "grandchild_elem"]}.

  • compression (str or dict, default ‘infer’) – For on-the-fly decompression of on-disk data. If ‘infer’ and ‘path_or_buffer’ is path-like, then detect compression from the following extensions: ‘.gz’, ‘.bz2’, ‘.zip’, ‘.xz’, ‘.zst’, ‘.tar’, ‘.tar.gz’, ‘.tar.xz’ or ‘.tar.bz2’ (otherwise no compression). If using ‘zip’ or ‘tar’, the ZIP file must contain only one data file to be read in. Set to None for no decompression. Can also be a dict with key ‘method’ set to one of {‘zip’, ‘gzip’, ‘bz2’, ‘zstd’, ‘xz’, ‘tar’} and other key-value pairs are forwarded to zipfile.ZipFile, gzip.GzipFile, bz2.BZ2File, zstandard.ZstdDecompressor, lzma.LZMAFile or tarfile.TarFile, respectively. As an example, the following could be passed for Zstandard decompression using a custom compression dictionary: compression={‘method’: ‘zstd’, ‘dict_data’: my_compression_dict}.

  • storage_options (dict, optional) – Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc. For HTTP(S) URLs the key-value pairs are forwarded to urllib.request.Request as header options. For other URLs (e.g. starting with "s3://" and "gcs://") the key-value pairs are forwarded to fsspec.open. Please see fsspec and urllib for more details; for more examples on storage options, refer to the pandas IO documentation.

  • dtype_backend ({'numpy_nullable', 'pyarrow'}) – Back-end data type applied to the resultant DataFrame (still experimental). If not specified, the default behavior is to not use nullable data types. If specified, the behavior is as follows:
    - "numpy_nullable": returns a nullable-dtype-backed DataFrame.
    - "pyarrow": returns a pyarrow-backed nullable ArrowDtype DataFrame.
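For very large documents, the iterparse option above avoids building the whole tree in memory. A minimal sketch of its use, written against pandas (whose read_xml API Modin mirrors); note that iterparse requires the document to be a file on local disk, so the sample XML is written to a temporary file first:

```python
import os
import tempfile

import pandas as pd  # modin.pandas exposes the same read_xml signature

xml = """<?xml version='1.0' encoding='utf-8'?>
<data>
  <row><shape>square</shape><degrees>360</degrees></row>
  <row><shape>circle</shape><degrees>360</degrees></row>
</data>"""

# iterparse only works on files fully extracted to local disk,
# not on in-memory buffers or compressed/online sources.
with tempfile.NamedTemporaryFile("w", suffix=".xml", delete=False) as f:
    f.write(xml)
    path = f.name

# Each "row" element becomes a record; only the listed descendants are kept.
df = pd.read_xml(
    path,
    parser="etree",  # the stdlib parser also supports iterparse; lxml works too
    iterparse={"row": ["shape", "degrees"]},
)
os.unlink(path)
print(df)
```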

Returns:

A DataFrame.

Return type:

DataFrame

See also

read_json

Convert a JSON string to pandas object.

read_html

Read HTML tables into a list of DataFrame objects.

Notes

This method is best designed to import shallow XML documents in the following format, which is the ideal fit for the two dimensions of a DataFrame (row by column).

<root>
    <row>
      <column1>data</column1>
      <column2>data</column2>
      <column3>data</column3>
      ...
   </row>
   <row>
      ...
   </row>
   ...
</root>

As a file format, XML documents can be designed in any way, including the layout of elements and attributes, as long as they conform to W3C specifications. Therefore, this method is a convenience handler for a specific, flatter design and not all possible XML structures.

However, for more complex XML documents, stylesheet allows you to temporarily redesign the original document with XSLT (a special-purpose language) into a flatter version for migration to a DataFrame.

This function will always return a single DataFrame or raise exceptions due to issues with the XML document, xpath, or other parameters.

See the read_xml documentation in the IO section of the docs for more information on using this method to parse XML files to DataFrames.
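As noted for the namespaces parameter, a document that declares a default (unprefixed) namespace can still be queried by binding an arbitrary temporary prefix to the same URI. A minimal sketch against pandas, whose read_xml API Modin mirrors (the 'doc' prefix and URI are illustrative):

```python
from io import StringIO

import pandas as pd  # modin.pandas exposes the same read_xml signature

xml = """<?xml version='1.0' encoding='utf-8'?>
<data xmlns="http://example.com">
  <row><shape>square</shape></row>
  <row><shape>circle</shape></row>
</data>"""

# XPath cannot address unprefixed names in a namespace directly, so map a
# temporary prefix ("doc") to the document's default-namespace URI.
df = pd.read_xml(
    StringIO(xml),
    xpath=".//doc:row",
    namespaces={"doc": "http://example.com"},
    parser="etree",  # this limited expression works with the stdlib parser
)
print(df)
```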

Examples

>>> from io import StringIO
>>> xml = '''<?xml version='1.0' encoding='utf-8'?>
... <data xmlns="http://example.com">
... <row>
... <shape>square</shape>
... <degrees>360</degrees>
... <sides>4.0</sides>
... </row>
... <row>
... <shape>circle</shape>
... <degrees>360</degrees>
... <sides/>
... </row>
... <row>
... <shape>triangle</shape>
... <degrees>180</degrees>
... <sides>3.0</sides>
... </row>
... </data>'''
>>> df = pd.read_xml(StringIO(xml))
>>> df
      shape  degrees  sides
0    square      360    4.0
1    circle      360    NaN
2  triangle      180    3.0
>>> xml = '''<?xml version='1.0' encoding='utf-8'?>
... <data>
... <row shape="square" degrees="360" sides="4.0"/>
... <row shape="circle" degrees="360"/>
... <row shape="triangle" degrees="180" sides="3.0"/>
... </data>'''
>>> df = pd.read_xml(StringIO(xml), xpath=".//row")
>>> df
      shape  degrees  sides
0    square      360    4.0
1    circle      360    NaN
2  triangle      180    3.0
>>> xml = '''<?xml version='1.0' encoding='utf-8'?>
... <doc:data xmlns:doc="https://example.com">
... <doc:row>
...     <doc:shape>square</doc:shape>
...     <doc:degrees>360</doc:degrees>
...     <doc:sides>4.0</doc:sides>
... </doc:row>
... <doc:row>
...     <doc:shape>circle</doc:shape>
...     <doc:degrees>360</doc:degrees>
...     <doc:sides/>
... </doc:row>
... <doc:row>
...     <doc:shape>triangle</doc:shape>
...     <doc:degrees>180</doc:degrees>
...     <doc:sides>3.0</doc:sides>
... </doc:row>
... </doc:data>'''
>>> df = pd.read_xml(
...     StringIO(xml),
...     xpath="//doc:row",
...     namespaces={"doc": "https://example.com"},
... )
>>> df
      shape  degrees  sides
0    square      360    4.0
1    circle      360    NaN
2  triangle      180    3.0
>>> xml_data = '''
...         <data>
...         <row>
...             <index>0</index>
...             <a>1</a>
...             <b>2.5</b>
...             <c>True</c>
...             <d>a</d>
...             <e>2019-12-31 00:00:00</e>
...         </row>
...         <row>
...             <index>1</index>
...             <b>4.5</b>
...             <c>False</c>
...             <d>b</d>
...             <e>2019-12-31 00:00:00</e>
...         </row>
...         </data>
...         '''
>>> df = pd.read_xml(
...     StringIO(xml_data), dtype_backend="numpy_nullable", parse_dates=["e"]
... )
>>> df
   index    a    b      c  d          e
0      0  1.0  2.5   True  a 2019-12-31
1      1  NaN  4.5  False  b 2019-12-31
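Finally, as noted for the dtype and converters parameters, a converter is applied instead of dtype conversion and can preserve values (such as zero-padded strings) that numeric inference would otherwise reinterpret. A minimal sketch against pandas, whose read_xml API Modin mirrors:

```python
from io import StringIO

import pandas as pd  # modin.pandas exposes the same read_xml signature

xml = """<?xml version='1.0' encoding='utf-8'?>
<data>
  <row><shape>square</shape><code>04</code></row>
  <row><shape>circle</shape><code>03</code></row>
</data>"""

# Without a converter, "04" and "03" would be inferred as integers 4 and 3;
# keeping them as str preserves the leading zeros.
df = pd.read_xml(
    StringIO(xml),
    parser="etree",
    converters={"code": str},
)
print(df)
```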