Converting multiple text files to CSV with Python

When wrangling data with Pandas, you will eventually work with many kinds of data sources. We have already covered how Pandas interacts with Excel spreadsheets, SQL databases, and so on. In today's tutorial we will learn how to use Python 3 to import text (.txt) files into Pandas DataFrames. As you would expect, the process is fairly straightforward to follow.


Example: read a text file into a DataFrame in Python

Suppose that you have a text file named interviews.txt, which contains tab-delimited data.

We will go ahead and load the text file using pd.read_csv():

import pandas as pd

hr = pd.read_csv('interviews.txt', names=['month', 'first', 'second'])

hr.head()

The result will look somewhat garbled, because you have not yet specified the tab as your column delimiter:


Specifying the escape sequence \t as your delimiter will fix your DataFrame data:

hr = pd.read_csv('interviews.txt', delimiter='\t', names=['month', 'first', 'second'])

hr.head()
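To see the fix end to end without needing the actual interviews.txt on disk, you can feed an in-memory buffer to pd.read_csv(); the sample rows below are made up for illustration:

```python
from io import StringIO

import pandas as pd

# Hypothetical tab-delimited content mirroring interviews.txt
data = "July\t15\t20\nAugust\t10\t12\n"

# delimiter='\t' tells the parser to split columns on tabs
hr = pd.read_csv(StringIO(data), delimiter='\t',
                 names=['month', 'first', 'second'])
print(hr)
```

The same call works identically with a file path in place of the StringIO buffer.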

Importing multiple text files into Python Pandas DataFrames

This is a more interesting case, in which you need to import several text files located in a folder on your operating system into a Pandas DataFrame. Your text files might contain data extracted from a third-party system, a database, and so on.

Before going further, we need to import a couple of Python libraries:

import os, glob

Now use the following code snippet:

# Define relative path to folder containing the text files

files_folder = "../data/"
files = []

# Create a dataframe list by using a list comprehension

files = [pd.read_csv(file, delimiter='\t', names=['month', 'first', 'second']) for file in glob.glob(os.path.join(files_folder, "*.txt"))]

# Concatenate the list of DataFrames into one
files_df = pd.concat(files)

Once you have populated your DataFrame, you can further analyze and visualize your data with Pandas.
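To deliver on the article's title, the combined DataFrame can then be written out as a single CSV with to_csv(). The sketch below is self-contained: it fabricates two small tab-delimited files in a temporary folder (the file names and rows are invented for illustration) and then merges them into one CSV:

```python
import glob
import os
import tempfile

import pandas as pd

# Build a throwaway folder with two sample tab-delimited .txt files
files_folder = tempfile.mkdtemp()
for name, row in [("jan.txt", "January\t5\t8\n"), ("feb.txt", "February\t7\t9\n")]:
    with open(os.path.join(files_folder, name), "w") as f:
        f.write(row)

# Read every .txt file into a DataFrame, then concatenate them
frames = [
    pd.read_csv(path, delimiter='\t', names=['month', 'first', 'second'])
    for path in sorted(glob.glob(os.path.join(files_folder, "*.txt")))
]
files_df = pd.concat(frames, ignore_index=True)

# One CSV file from many text files
out_path = os.path.join(files_folder, "combined.csv")
files_df.to_csv(out_path, index=False)
```

In practice you would point files_folder at your own data directory, as in the snippet above.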


The pandas I/O API is a set of top level reader functions accessed like pandas.read_csv() that generally return a pandas object. The corresponding writer functions are object methods that are accessed like DataFrame.to_csv(). Below is a table containing available readers and writers.

Format Type   Data Description        Reader          Writer
text          CSV                     read_csv        to_csv
text          Fixed-Width Text File   read_fwf
text          JSON                    read_json       to_json
text          HTML                    read_html       to_html
text          LaTeX                                   Styler.to_latex
text          XML                     read_xml        to_xml
text          Local clipboard         read_clipboard  to_clipboard
binary        MS Excel                read_excel      to_excel
binary        OpenDocument            read_excel
binary        HDF5 Format             read_hdf        to_hdf
binary        Feather Format          read_feather    to_feather
binary        Parquet Format          read_parquet    to_parquet
binary        ORC Format              read_orc
binary        Stata                   read_stata      to_stata
binary        SAS                     read_sas
binary        SPSS                    read_spss
binary        Python Pickle Format    read_pickle     to_pickle
SQL           SQL                     read_sql        to_sql
SQL           Google BigQuery         read_gbq        to_gbq

Here is an informal performance comparison for some of these IO methods.

Note

For examples that use the StringIO class, make sure you import it with from io import StringIO for Python 3. A recurring example in this section reads comma-separated data from such a buffer:

In [13]: import numpy as np

In [14]: data = "a,b,c,d\n1,2,3,4\n5,6,7,8\n9,10,11"

In [15]: print(data)
a,b,c,d
1,2,3,4
5,6,7,8
9,10,11

In [16]: df = pd.read_csv(StringIO(data), dtype=object)

In [17]: df
Out[17]: 
   a   b   c    d
0  1   2   3    4
1  5   6   7    8
2  9  10  11  NaN

In [18]: df["a"][0]
Out[18]: '1'

In [19]: df = pd.read_csv(StringIO(data), dtype={"b": object, "c": np.float64, "d": "Int64"})

In [20]: df.dtypes
Out[20]: 
a      int64
b     object
c    float64
d      Int64
dtype: object

CSV & text files

The workhorse function for reading text files (a.k.a. flat files) is read_csv(). See the cookbook for some advanced strategies.

Parsing options

read_csv() accepts the following common arguments:

Basic

filepath_or_buffer : various

Either a path to a file (a str, pathlib.Path, or py:py._path.local.LocalPath), URL (including http, ftp, and S3 locations), or any object with a read() method (such as an open file or StringIO).
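A minimal sketch of the "any object with a read() method" case, using an in-memory StringIO buffer in place of a file path (the data is invented for illustration):

```python
from io import StringIO

import pandas as pd

# read_csv accepts a path, a URL, or any file-like object with read()
buf = StringIO("a,b\n1,2\n3,4\n")
df = pd.read_csv(buf)
print(df)
```

Passing an already-open file handle works the same way as the buffer here.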

sep : str, defaults to ',' for read_csv(), '\t' for read_table()

Delimiter to use. If sep is None, the C engine cannot automatically detect the separator, but the Python parsing engine can, meaning the latter will be used and automatically detect the separator by Python's builtin sniffer tool, csv.Sniffer. In addition, separators longer than 1 character and different from '\s+' will be interpreted as regular expressions and will also force the use of the Python parsing engine. Note that regex delimiters are prone to ignoring quoted data. Regex example: '\\r\\t'.
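As a small illustration of the regex-separator behavior described above (the sample data and pattern are made up for this sketch):

```python
from io import StringIO

import pandas as pd

# A multi-character separator is treated as a regular expression and
# forces the Python parsing engine
data = "a; b;; c\n1; 2;; 3\n"
df = pd.read_csv(StringIO(data), sep=r';+\s*', engine='python')
print(df)
```

Because the pattern matches one or more semicolons plus trailing spaces, the uneven ';' and ';;' runs both act as a single delimiter.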

delimiter : str, default None

Alternative argument name for sep.

delim_whitespace : boolean, default False

Specifies whether or not whitespace (e.g. ' ' or '\t') will be used as the delimiter. Equivalent to setting sep='\s+'. If this option is set to True, nothing should be passed in for the delimiter parameter.
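A quick sketch of delim_whitespace on whitespace-separated data (the values are invented, and note that newer pandas releases steer you toward sep=r'\s+' instead):

```python
from io import StringIO

import pandas as pd

# Runs of spaces and tabs of any length act as a single delimiter
data = "a b  c\n1 2  3\n4 5  6\n"
df = pd.read_csv(StringIO(data), delim_whitespace=True)
print(df)
```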

Column and index locations and names

header : int or list of ints, default 'infer'

Row number(s) to use as the column names, and the start of the data. Default behavior is to infer the column names: if no names are passed the behavior is identical to header=0 and column names are inferred from the first line of the file; if column names are passed explicitly then the behavior is identical to header=None. Explicitly pass header=0 to be able to replace existing names.

The header can be a list of ints that specify row locations for a MultiIndex on the columns, e.g. [0, 1, 3]. Intervening rows that are not specified will be skipped (e.g. 2 in this example is skipped). Note that this parameter ignores commented lines and empty lines if skip_blank_lines=True, so header=0 denotes the first line of data rather than the first line of the file.
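A short sketch of the "pass header=0 to replace existing names" behavior, on made-up data:

```python
from io import StringIO

import pandas as pd

data = "x,y\n1,2\n3,4\n"

# header=0 consumes the existing header row; names supplies replacements
df = pd.read_csv(StringIO(data), header=0, names=['col_a', 'col_b'])
print(df)
```

Without header=0, the same names argument would keep "x,y" as a data row.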

names : array-like, default None

List of column names to use. If the file contains no header row, then you should explicitly pass header=None. Duplicates in this list are not allowed.
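The headerless case above can be sketched as follows (sample data invented):

```python
from io import StringIO

import pandas as pd

# The file has no header row, so name the columns explicitly
data = "1,2,3\n4,5,6\n"
df = pd.read_csv(StringIO(data), names=['a', 'b', 'c'], header=None)
print(df)
```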

index_col : int, str, sequence of int / str, or False, optional, default None

Column(s) to use as the row labels of the DataFrame, either given as string name or column index. If a sequence of int / str is given, a MultiIndex is used.

Note

index_col=False can be used to force pandas to not use the first column as the index, e.g. when you have a malformed file with delimiters at the end of each line.

The default value of None instructs pandas to guess. If the number of fields in the column header row is equal to the number of fields in the body of the data file, then a default index is used. If it is larger, then the first columns are used as the index so that the remaining number of fields in the body are equal to the number of fields in the header.

The first row after the header is used to determine the number of columns, which will go into the index. If the subsequent rows contain fewer columns than the first row, they are filled with NaN.

This can be avoided through usecols. This ensures that the columns are taken as is and the trailing data are ignored.
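Both index_col forms can be sketched on a small invented file:

```python
from io import StringIO

import pandas as pd

data = "id,name,score\n1,ann,10\n2,bob,20\n"

# Use the 'id' column as the row labels
df = pd.read_csv(StringIO(data), index_col='id')

# Force a default RangeIndex even though an index could be guessed
df2 = pd.read_csv(StringIO(data), index_col=False)
print(df)
print(df2)
```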

usecols : list-like or callable, default None

Return a subset of the columns. If list-like, all elements must either be positional (i.e. integer indices into the document columns) or strings that correspond to column names provided either by the user in names or inferred from the document header row(s). If names are given, the document header row(s) are not taken into account. For example, a valid list-like usecols parameter would be [0, 1, 2] or ['foo', 'bar', 'baz'].

Element order is ignored, so usecols=[0, 1] is the same as [1, 0]. To instantiate a DataFrame with element order preserved, use pd.read_csv(data, usecols=['foo', 'bar'])[['foo', 'bar']] for columns in ['foo', 'bar'] order or pd.read_csv(data, usecols=['foo', 'bar'])[['bar', 'foo']] for ['bar', 'foo'] order.

If callable, the callable function will be evaluated against the column names, returning names where the callable function evaluates to True:

In [1]: import pandas as pd

In [2]: from io import StringIO

In [3]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"

In [4]: pd.read_csv(StringIO(data))
Out[4]: 
  col1 col2  col3
0    a    b     1
1    a    b     2
2    c    d     3

In [5]: pd.read_csv(StringIO(data), usecols=lambda x: x.upper() in ["COL1", "COL3"])
Out[5]: 
  col1  col3
0    a     1
1    a     2
2    c     3

Using this parameter results in much faster parsing time and lower memory usage when using the C engine. The Python engine loads the data first before deciding which columns to drop.

bóp boolean, mặc định
In [13]: import numpy as np

In [14]: data = "a,b,c,d\n1,2,3,4\n5,6,7,8\n9,10,11"

In [15]: print(data)
a,b,c,d
1,2,3,4
5,6,7,8
9,10,11

In [16]: df = pd.read_csv(StringIO(data), dtype=object)

In [17]: df
Out[17]: 
   a   b   c    d
0  1   2   3    4
1  5   6   7    8
2  9  10  11  NaN

In [18]: df["a"][0]
Out[18]: '1'

In [19]: df = pd.read_csv(StringIO(data), dtype={"b": object, "c": np.float64, "d": "Int64"})

In [20]: df.dtypes
Out[20]: 
a      int64
b     object
c    float64
d      Int64
dtype: object
61

If the parsed data only contains one column then return a Series.

Deprecated since version 1.4.0: Append .squeeze("columns") to the call to read_csv to squeeze the data.
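A minimal sketch of the replacement for the deprecated squeeze parameter, using hypothetical one-column data:

```python
import pandas as pd
from io import StringIO

# Hypothetical one-column input
data = "col1\n1\n2\n3"

# Instead of pd.read_csv(..., squeeze=True), append .squeeze("columns"):
s = pd.read_csv(StringIO(data)).squeeze("columns")
print(type(s).__name__)  # Series
```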

prefix str, default None

Prefix to add to column numbers when no header, e.g. 'X' for X0, X1, …

Deprecated since version 1.4.0: Use a list comprehension on the DataFrame's columns after calling read_csv.

In [6]: data = "col1,col2,col3\na,b,1"

In [7]: df = pd.read_csv(StringIO(data))

In [8]: df.columns = [f"pre_{col}" for col in df.columns]

In [9]: df
Out[9]: 
  pre_col1 pre_col2  pre_col3
0        a        b         1

mangle_dupe_cols boolean, default True

Duplicate columns will be specified as 'X', 'X.1'…'X.N', rather than 'X'…'X'. Passing in False will cause data to be overwritten if there are duplicate names in the columns.
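The default de-duplication can be seen with a hypothetical header that repeats a column name:

```python
import pandas as pd
from io import StringIO

# Duplicate column names in the header are mangled to 'a', 'a.1', ...
data = "a,a,b\n1,2,3"
df = pd.read_csv(StringIO(data))
print(list(df.columns))  # ['a', 'a.1', 'b']
```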

Deprecated since version 1.5.0: The argument was never implemented, and a new argument where the renaming pattern can be specified will be added instead.

General parsing configuration

dtype Type name or dict of column -> type, default None

Data type for data or columns. E.g. {'a': np.float64, 'b': np.int32}. Use str or object together with suitable na_values settings to preserve and not interpret dtype. If converters are specified, they will be applied INSTEAD of dtype conversion.

New in version 1.5.0: Support for defaultdict was added. Specify a defaultdict as input where the default determines the dtype of the columns which are not explicitly listed.

engine {'c', 'python', 'pyarrow'}

Parser engine to use. The C and pyarrow engines are faster, while the python engine is currently more feature-complete. Multithreading is currently only supported by the pyarrow engine

New in version 1.4.0: The "pyarrow" engine was added as an experimental engine, and some features are unsupported, or may not work correctly, with this engine.

converters dict, default None

Dict of functions for converting values in certain columns. Keys can either be integers or column labels
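A small sketch of a converter keyed by column label, with hypothetical data where numeric parsing would otherwise drop leading zeros:

```python
import pandas as pd
from io import StringIO

# A converter keeps the 'code' column as strings, preserving leading zeros
data = "code,qty\n007,3\n042,5"
df = pd.read_csv(StringIO(data), converters={"code": str})
print(df["code"].tolist())  # ['007', '042']
```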

true_values list, default None

Values to consider as True.

false_values list, default None

Values to consider as False.
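A sketch of both options together, on hypothetical data where every value in the column matches one of the two lists so the column can be parsed as booleans:

```python
import pandas as pd
from io import StringIO

# 'Yes'/'No' are mapped to True/False while parsing
data = "flag\nYes\nNo\nYes"
df = pd.read_csv(StringIO(data), true_values=["Yes"], false_values=["No"])
print(df["flag"].tolist())  # [True, False, True]
```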

skipinitialspace boolean, default False

Skip spaces after delimiter
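Hypothetical data with spaces after each comma shows the effect; without the option, the second column would be named ' b' and the strings would keep their leading space:

```python
import pandas as pd
from io import StringIO

data = "a, b\n1, x\n2, y"
df = pd.read_csv(StringIO(data), skipinitialspace=True)
print(df.columns.tolist())  # ['a', 'b']
print(df["b"].tolist())     # ['x', 'y']
```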

skiprows list-like or integer, default None

Line numbers to skip (0-indexed) or number of lines to skip (int) at the start of the file

If callable, the callable function will be evaluated against the row indices, returning True if the row should be skipped and False otherwise

In [10]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"

In [11]: pd.read_csv(StringIO(data))
Out[11]: 
  col1 col2  col3
0    a    b     1
1    a    b     2
2    c    d     3

In [12]: pd.read_csv(StringIO(data), skiprows=lambda x: x % 2 != 0)
Out[12]: 
  col1 col2  col3
0    a    b     2

skipfooter int, default 0

Number of lines at bottom of file to skip (unsupported with engine=’c’)
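A minimal sketch with a hypothetical trailing summary row; note that skipfooter requires the python engine:

```python
import pandas as pd
from io import StringIO

# The last line ('total,999') is dropped before parsing
data = "a,b\n1,2\n3,4\ntotal,999"
df = pd.read_csv(StringIO(data), skipfooter=1, engine="python")
print(len(df))  # 2
```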

nrows int, default None

Number of rows of file to read. Useful for reading pieces of large files
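A sketch with hypothetical data, reading only the first two data rows:

```python
import pandas as pd
from io import StringIO

# Only the first two rows after the header are parsed
data = "a,b\n1,2\n3,4\n5,6\n7,8"
df = pd.read_csv(StringIO(data), nrows=2)
print(df["a"].tolist())  # [1, 3]
```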

low_memory boolean, default True

Internally process the file in chunks, resulting in lower memory use while parsing, but possibly mixed type inference. To ensure no mixed types either set False, or specify the type with the dtype parameter. Note that the entire file is read into a single DataFrame regardless, use the chunksize or iterator parameter to return the data in chunks. (Only valid with C parser)
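Chunked reading can be sketched as follows, with hypothetical data of ten rows read four at a time:

```python
import pandas as pd
from io import StringIO

# chunksize returns an iterator of DataFrames instead of a single frame
data = "a,b\n" + "\n".join(f"{i},{i * 2}" for i in range(10))
sizes = [len(chunk) for chunk in pd.read_csv(StringIO(data), chunksize=4)]
print(sizes)  # [4, 4, 2]
```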

memory_map boolean, default False

If a filepath is provided for filepath_or_buffer, map the file object directly onto memory and access the data directly from there. Using this option can improve performance because there is no longer any I/O overhead.
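Since memory mapping only applies to real file paths (not in-memory buffers), a sketch needs a temporary file:

```python
import os
import tempfile

import pandas as pd

# Write a small hypothetical CSV to disk so memory_map has a real path
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False) as f:
    f.write("a,b\n1,2\n3,4")
    path = f.name

df = pd.read_csv(path, memory_map=True)
os.unlink(path)
print(df.shape)  # (2, 2)
```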

NA and missing data handling

na_values scalar, str, list-like, or dict, default None

Additional strings to recognize as NA/NaN. If dict passed, specific per-column NA values. See below for a list of the values interpreted as NaN by default
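A minimal sketch with a hypothetical sentinel string that is not an NA marker by default:

```python
import pandas as pd
from io import StringIO

# 'MISSING' is recognized as NaN only because it is listed in na_values
data = "a,b\n1,MISSING\n2,3"
df = pd.read_csv(StringIO(data), na_values=["MISSING"])
print(df["b"].isna().tolist())  # [True, False]
```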

keep_default_na boolean, default True

Whether or not to include the default NaN values when parsing the data. Depending on whether na_values is passed in, the behavior is as follows:

  • If keep_default_na is True, and na_values are specified, na_values is appended to the default NaN values used for parsing

  • If keep_default_na is True, and na_values are not specified, only the default NaN values are used for parsing

  • If keep_default_na is False, and na_values are specified, only the NaN values specified na_values are used for parsing

  • If keep_default_na is False, and na_values are not specified, no strings will be parsed as NaN
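The third case above can be sketched with hypothetical data containing the default marker 'NA' plus a custom sentinel:

```python
import pandas as pd
from io import StringIO

# keep_default_na=False keeps 'NA' as a plain string; only 'missing'
# (listed in na_values) becomes NaN
data = "a\nNA\nmissing\n1"
df = pd.read_csv(StringIO(data), keep_default_na=False, na_values=["missing"])
print(df["a"].tolist())
```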

Note that if na_filter is passed in as False, the keep_default_na and na_values parameters will be ignored
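A quick sketch of the NA-handling options above; the data string is made up for illustration:

```python
import pandas as pd
from io import StringIO

data = "a\nmissing\n1\n"

# Treat the custom token 'missing' as NaN in addition to the defaults
df = pd.read_csv(StringIO(data), na_values=["missing"])

print(df["a"].isna().sum())  # 1
```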

na_filter boolean, default True

Detect missing value markers (empty strings and the value of na_values). In data without any NAs, passing na_filter=False can improve the performance of reading a large file
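A minimal illustration of what na_filter controls, on a made-up data string:

```python
import pandas as pd
from io import StringIO

data = "a,b\n1,\n2,x\n3,NA\n"

# Default: empty fields and the 'NA' token are detected as missing
df = pd.read_csv(StringIO(data))

# na_filter=False: no NA detection at all; values stay as plain strings
df_raw = pd.read_csv(StringIO(data), na_filter=False)

print(df["b"].isna().sum())  # 2
print(df_raw["b"].tolist())  # ['', 'x', 'NA']
```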

verbose boolean, default False

Indicate number of NA values placed in non-numeric columns

skip_blank_lines boolean, default True

If True, skip over blank lines rather than interpreting as NaN values
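For example, with a blank line in the middle of a small made-up file:

```python
import pandas as pd
from io import StringIO

data = "a,b\n1,2\n\n3,4\n"

# Default (True): the blank line is skipped entirely
n_skip = len(pd.read_csv(StringIO(data)))

# False: the blank line is kept as a row of NaN
n_keep = len(pd.read_csv(StringIO(data), skip_blank_lines=False))

print(n_skip, n_keep)  # 2 3
```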

Datetime handling

parse_dates boolean or list of ints or names or list of lists or dict, default False.
  • If True -> try parsing the index

  • If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate date column

  • If [[1, 3]] -> combine columns 1 and 3 and parse as a single date column

  • If {'foo': [1, 3]} -> parse columns 1, 3 as date and call result ‘foo’
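The two most common forms can be sketched as follows (made-up data string):

```python
import pandas as pd
from io import StringIO

data = "date,value\n2021-03-14,1.5\n2021-03-15,2.5\n"

# List of column names: parse those columns as dates
df = pd.read_csv(StringIO(data), parse_dates=["date"])

# True with index_col: parse the index instead
df2 = pd.read_csv(StringIO(data), index_col=0, parse_dates=True)

print(df["date"].dtype, df2.index.dtype)
```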

Note

A fast-path exists for iso8601-formatted dates

infer_datetime_format boolean, default False

If True and parse_dates is enabled for a column, attempt to infer the datetime format to speed up the processing

keep_date_col boolean, default False

If True and parse_dates specifies combining multiple columns then keep the original columns

date_parser function, default None

Function to use for converting a sequence of string columns to an array of datetime instances. The default uses dateutil.parser.parser to do the conversion. pandas will try to call date_parser in three different ways, advancing to the next if an exception occurs. 1) Pass one or more arrays (as defined by parse_dates) as arguments; 2) concatenate (row-wise) the string values from the columns defined by parse_dates into a single array and pass that; and 3) call date_parser once for each row using one or more strings (corresponding to the columns defined by parse_dates) as arguments

dayfirst boolean, default False

DD/MM format dates, international and European format
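The effect of dayfirst on an ambiguous date, using a made-up data string:

```python
import pandas as pd
from io import StringIO

data = "when\n02/01/2021\n"

# Default (month first): 1 February; dayfirst=True: 2 January
us = pd.read_csv(StringIO(data), parse_dates=["when"])
eu = pd.read_csv(StringIO(data), parse_dates=["when"], dayfirst=True)

print(us["when"].dt.month[0], eu["when"].dt.month[0])  # 2 1
```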

cache_dates boolean, default True

If True, use a cache of unique, converted dates to apply the datetime conversion. May produce significant speed-up when parsing duplicate date strings, especially ones with timezone offsets

New in version 0.25.0

Iteration

iterator boolean, default False

Return TextFileReader object for iteration or getting chunks with get_chunk()

chunksize int, default None

Return TextFileReader object for iteration. See below
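A minimal sketch of chunked reading, on a small in-memory data string:

```python
import pandas as pd
from io import StringIO

data = "a,b\n" + "\n".join(f"{i},{i * 2}" for i in range(10))

# chunksize returns a TextFileReader yielding DataFrames of up to 4 rows
reader = pd.read_csv(StringIO(data), chunksize=4)
sizes = [len(chunk) for chunk in reader]

print(sizes)  # [4, 4, 2]
```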

Quoting, compression, and file format

compression {'infer', 'gzip', 'bz2', 'zip', 'xz', 'zstd', None, dict}, default 'infer'

For on-the-fly decompression of on-disk data. If ‘infer’, then use gzip, bz2, zip, xz, or zstandard if filepath_or_buffer is path-like ending in ‘.gz’, ‘.bz2’, ‘.zip’, ‘.xz’, ‘.zst’, respectively, and no decompression otherwise. If using ‘zip’, the ZIP file must contain only one data file to be read in. Set to None for no decompression. Can also be a dict with key 'method' set to one of {'zip', 'gzip', 'bz2', 'zstd'} and other key-value pairs are forwarded to zipfile.ZipFile, gzip.GzipFile, bz2.BZ2File, or zstandard.ZstdDecompressor. As an example, the following could be passed for faster compression and to create a reproducible gzip archive: compression={'method': 'gzip', 'compresslevel': 1, 'mtime': 1}

Changed in version 1.1.0: dict option extended to support 'gzip' and 'bz2'

Changed in version 1.2.0: Previous versions forwarded dict entries for ‘gzip’ to gzip.open
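A small round-trip sketch of the default compression='infer' behavior, writing to a temporary file:

```python
import os
import tempfile

import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})

# With a '.gz' suffix, compression='infer' (the default) picks gzip
# on both write and read, so the file round-trips transparently
path = os.path.join(tempfile.mkdtemp(), "data.csv.gz")
df.to_csv(path, index=False)
back = pd.read_csv(path)

print(back.equals(df))  # True
```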

thousands str, default None

Thousands separator

decimal str, default '.'

Character to recognize as decimal point. E.g. use ',' for European data

float_precision string, default None

Specifies which converter the C engine should use for floating-point values. The options are None for the ordinary converter, 'high' for the high-precision converter, and 'round_trip' for the round-trip converter

lineterminator str (length 1), default None

Character to break file into lines. Only valid with C parser
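For instance, records separated by a custom character instead of newlines (made-up data string):

```python
import pandas as pd
from io import StringIO

# '~' breaks the stream into lines (C parser only)
data = "a,b~1,2~3,4"
df = pd.read_csv(StringIO(data), lineterminator="~")

print(len(df), list(df.columns))  # 2 ['a', 'b']
```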

quotechar str (length 1)

The character used to denote the start and end of a quoted item. Quoted items can include the delimiter and it will be ignored

quoting int or csv.QUOTE_* instance, default 0

Control field quoting behavior per csv.QUOTE_* constants. Use one of QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or QUOTE_NONE (3)

doublequote boolean, default True

When quotechar is specified and quoting is not QUOTE_NONE, indicate whether or not to interpret two consecutive quotechar elements inside a field as a single quotechar element
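For example, with doubled quotes inside a quoted field (made-up data string):

```python
import pandas as pd
from io import StringIO

# Two consecutive quotechar elements inside a quoted field collapse
# into one literal quote when doublequote=True (the default)
data = 'a\n"she said ""hi"""\n'
df = pd.read_csv(StringIO(data))

print(df["a"][0])  # she said "hi"
```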

escapechar str (length 1), default None

One-character string used to escape delimiter when quoting is QUOTE_NONE
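A sketch of escaping the delimiter when quoting is disabled (made-up data string):

```python
import csv
from io import StringIO

import pandas as pd

# With quoting disabled, a backslash escapes the delimiter
data = "a,b\n1,hello\\, world\n"
df = pd.read_csv(StringIO(data), quoting=csv.QUOTE_NONE, escapechar="\\")

print(df["b"][0])  # hello, world
```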

comment str, default None

Indicates remainder of line should not be parsed. If found at the beginning of a line, the line will be ignored altogether. This parameter must be a single character. Like empty lines (as long as skip_blank_lines=True), fully commented lines are ignored by the parameter header but not by skiprows. For example, if comment='#', parsing ‘#empty\na,b,c\n1,2,3’ with header=0 will result in ‘a,b,c’ being treated as the header
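The example above can be run directly:

```python
import pandas as pd
from io import StringIO

data = "#empty\na,b,c\n1,2,3\n"

# The fully commented first line is ignored, so header=0 sees 'a,b,c'
df = pd.read_csv(StringIO(data), comment="#", header=0)

print(list(df.columns))  # ['a', 'b', 'c']
```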

encoding str, default None

Mã hóa để sử dụng cho UTF khi đọc/ghi (e. g.

In [21]: data = "col_1\n1\n2\n'A'\n4.22"

In [22]: df = pd.read_csv(StringIO(data), converters={"col_1": str})

In [23]: df
Out[23]: 
  col_1
0     1
1     2
2   'A'
3  4.22

In [24]: df["col_1"].apply(type).value_counts()
Out[24]: 
    4
Name: col_1, dtype: int64
89).

dialect str or csv.Dialect instance, default None

If provided, this parameter will override values (default or not) for the following parameters: delimiter, doublequote, escapechar, skipinitialspace, quotechar, and quoting. If it is necessary to override values, a ParserWarning will be issued. See the csv.Dialect documentation for more details.
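A small sketch of overriding parser settings with a csv.Dialect instance (the sample string is illustrative):

```python
import csv
from io import StringIO

import pandas as pd

# Start from the standard excel dialect and disable quoting entirely;
# the dialect's settings override read_csv's own defaults.
dia = csv.excel()
dia.quoting = csv.QUOTE_NONE

# With QUOTE_NONE, the stray quote character is read back literally,
# and the extra leading field becomes the index.
data = 'label1,label2,label3\nindex1,"a,c,e\nindex2,b,d,f'
df = pd.read_csv(StringIO(data), dialect=dia)
print(df)
```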

Error handling

error_bad_lines boolean, optional, default None

Lines with too many fields (e.g. a csv line with too many commas) will by default cause an exception to be raised, and no DataFrame will be returned. If False, then these "bad lines" will be dropped from the DataFrame that is returned. See bad lines below.

Deprecated since version 1.3.0: The on_bad_lines parameter should be used instead to specify behavior upon encountering a bad line.

warn_bad_lines boolean, optional, default None

If error_bad_lines is False, and warn_bad_lines is True, a warning for each "bad line" will be output.

Deprecated since version 1.3.0: The on_bad_lines parameter should be used instead to specify behavior upon encountering a bad line.

on_bad_lines ('error', 'warn', 'skip'), default 'error'

Specifies what to do upon encountering a bad line (a line with too many fields). Allowed values are:

  • 'error', raise a ParserError when a bad line is encountered

  • 'warn', print a warning when a bad line is encountered and skip that line

  • 'skip', skip bad lines without raising or warning when they are encountered

New in version 1.3.0
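A minimal sketch of these options (pandas >= 1.3; the sample data is made up for demonstration):

```python
from io import StringIO

import pandas as pd

# A CSV whose second data line has one field too many.
data = "a,b,c\n1,2,3\n4,5,6,7\n8,9,10"

# on_bad_lines="skip" silently drops the malformed line, where
# on_bad_lines="error" (the default) would raise a ParserError.
df = pd.read_csv(StringIO(data), on_bad_lines="skip")
print(df.shape)
```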

Specifying column data types

You can indicate the data type for the whole DataFrame or individual columns:

In [13]: import numpy as np

In [14]: data = "a,b,c,d\n1,2,3,4\n5,6,7,8\n9,10,11"

In [15]: print(data)
a,b,c,d
1,2,3,4
5,6,7,8
9,10,11

In [16]: df = pd.read_csv(StringIO(data), dtype=object)

In [17]: df
Out[17]: 
   a   b   c    d
0  1   2   3    4
1  5   6   7    8
2  9  10  11  NaN

In [18]: df["a"][0]
Out[18]: '1'

In [19]: df = pd.read_csv(StringIO(data), dtype={"b": object, "c": np.float64, "d": "Int64"})

In [20]: df.dtypes
Out[20]: 
a      int64
b     object
c    float64
d      Int64
dtype: object

Fortunately, pandas offers more than one way to ensure that your column(s) contain only one dtype. If you're unfamiliar with these concepts, you can read more about dtypes and about object conversion in pandas.

For instance, you can use the converters argument of read_csv():

In [21]: data = "col_1\n1\n2\n'A'\n4.22"

In [22]: df = pd.read_csv(StringIO(data), converters={"col_1": str})

In [23]: df
Out[23]: 
  col_1
0     1
1     2
2   'A'
3  4.22

In [24]: df["col_1"].apply(type).value_counts()
Out[24]: 
    4
Name: col_1, dtype: int64

Or you can use the to_numeric() function to coerce the dtypes after reading in the data,

In [25]: df2 = pd.read_csv(StringIO(data))

In [26]: df2["col_1"] = pd.to_numeric(df2["col_1"], errors="coerce")

In [27]: df2
Out[27]: 
   col_1
0   1.00
1   2.00
2    NaN
3   4.22

In [28]: df2["col_1"].apply(type).value_counts()
Out[28]: 
    4
Name: col_1, dtype: int64

which will convert all valid parsing to floats, leaving the invalid parsing as NaN.

Ultimately, how you deal with reading in columns containing mixed dtypes depends on your specific needs. In the case above, if you wanted to NaN out the data anomalies, then to_numeric() is probably your best option. However, if you wanted all the data to be coerced no matter the type, then using the converters argument of read_csv() would certainly be worth trying.

Note

In some cases, reading in abnormal data with columns containing mixed dtypes will result in an inconsistent dataset. If you rely on pandas to infer the dtypes of your columns, the parsing engine will go and infer the dtypes for different chunks of the data, rather than the whole dataset at once. Consequently, you can end up with column(s) with mixed dtypes. For example,

In [29]: col_1 = list(range(500000)) + ["a", "b"] + list(range(500000))

In [30]: df = pd.DataFrame({"col_1": col_1})

In [31]: df.to_csv("foo.csv")

In [32]: mixed_df = pd.read_csv("foo.csv")

In [33]: mixed_df["col_1"].apply(type).value_counts()
Out[33]: 
    737858
    262144
Name: col_1, dtype: int64

In [34]: mixed_df["col_1"].dtype
Out[34]: dtype('O')

will result with mixed_df containing an int dtype for certain chunks of the column, and str for others due to the mixed dtypes from the data that was read in. It is important to note that the overall column will be marked with a dtype of object, which is used for columns with mixed dtypes.

Specifying categorical dtype

Categorical columns can be parsed directly by specifying dtype='category' or dtype=CategoricalDtype(categories, ordered):

In [35]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"

In [36]: pd.read_csv(StringIO(data))
Out[36]: 
  col1 col2  col3
0    a    b     1
1    a    b     2
2    c    d     3

In [37]: pd.read_csv(StringIO(data)).dtypes
Out[37]: 
col1    object
col2    object
col3     int64
dtype: object

In [38]: pd.read_csv(StringIO(data), dtype="category").dtypes
Out[38]: 
col1    category
col2    category
col3    category
dtype: object

Individual columns can be parsed as a Categorical using a dict specification:

In [39]: pd.read_csv(StringIO(data), dtype={"col1": "category"}).dtypes
Out[39]: 
col1    category
col2      object
col3       int64
dtype: object

Specifying dtype='category' will result in an unordered Categorical whose categories are the unique values observed in the data. For more control on the categories and order, create a CategoricalDtype ahead of time, and pass that for that column's dtype:

In [40]: from pandas.api.types import CategoricalDtype

In [41]: dtype = CategoricalDtype(["d", "c", "b", "a"], ordered=True)

In [42]: pd.read_csv(StringIO(data), dtype={"col1": dtype}).dtypes
Out[42]: 
col1    category
col2      object
col3       int64
dtype: object

When using dtype=CategoricalDtype, "unexpected" values outside of dtype.categories are treated as missing values.
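For instance (a minimal sketch reusing the col1/col2/col3 sample data from the examples above):

```python
from io import StringIO

import pandas as pd
from pandas.api.types import CategoricalDtype

data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"

# The dtype deliberately omits "c", so the "c" in the last row
# falls outside the categories and is read back as NaN.
dtype = CategoricalDtype(["a", "b", "d"])
col1 = pd.read_csv(StringIO(data), dtype={"col1": dtype})["col1"]
print(col1.isna().tolist())
```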

This matches the behavior of Categorical.set_categories().

Note

With dtype='category', the resulting categories will always be parsed as strings (object dtype). If the categories are numeric they can be converted using the to_numeric() function, or as appropriate, another converter such as to_datetime().

When dtype is a CategoricalDtype with homogeneous categories (all numeric, all datetimes, etc.), the conversion is done automatically.
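For example (a sketch with numeric categories; the sample data mirrors the earlier examples):

```python
from io import StringIO

import pandas as pd
from pandas.api.types import CategoricalDtype

data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"

# Because every category is an integer, the parsed categories
# come back numeric rather than as strings.
dtype = CategoricalDtype([1, 2, 3])
col3 = pd.read_csv(StringIO(data), dtype={"col3": dtype})["col3"]
print(col3.cat.categories.dtype)
```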

Naming and using columns

Handling column names

A file may or may not have a header row. pandas assumes the first row should be used as the column names.
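A minimal sketch (with made-up sample data):

```python
from io import StringIO

import pandas as pd

# With no arguments, the first row supplies the column names.
data = "a,b,c\n1,2,3\n4,5,6"
df = pd.read_csv(StringIO(data))
print(list(df.columns))
```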

By specifying the names argument in conjunction with header you can indicate other names to use and whether or not to throw away the header row (if any).
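For example (a sketch, same made-up data):

```python
from io import StringIO

import pandas as pd

data = "a,b,c\n1,2,3\n4,5,6"

# header=0 says the first row is a header to be discarded,
# and names supplies the replacement column names.
df = pd.read_csv(StringIO(data), names=["foo", "bar", "baz"], header=0)
print(list(df.columns))
```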

If the header is in a row other than the first, pass the row number to header. This will skip the preceding rows.
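A quick sketch (the sample string is illustrative):

```python
from io import StringIO

import pandas as pd

# The real header sits on the second line (row index 1);
# everything before it is skipped.
data = "skip this line\na,b,c\n1,2,3"
df = pd.read_csv(StringIO(data), header=1)
print(list(df.columns))
```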

Note

Default behavior is to infer the column names: if no names are passed, the behavior is identical to header=0 and column names are inferred from the first non-blank line of the file; if column names are passed explicitly, then the behavior is identical to header=None.

Duplicate names parsing

Deprecated since version 1.5.0: mangle_dupe_cols was never implemented, and a new argument where the renaming pattern can be specified will be added instead.

If the file or header contains duplicate names, pandas will by default distinguish between them so as to prevent overwriting data.
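For instance (a small sketch with made-up data):

```python
from io import StringIO

import pandas as pd

# Duplicate names are suffixed rather than overwritten:
# the second "a" becomes "a.1".
data = "a,b,a\n0,1,2\n3,4,5"
df = pd.read_csv(StringIO(data))
print(list(df.columns))
```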

There is no more duplicate data because mangle_dupe_cols=True by default, which modifies a series of duplicate columns ‘X’, …, ‘X’ to become ‘X’, ‘X.1’, …, ‘X.N’.

Filtering columns (usecols)

The usecols argument allows you to select any subset of the columns in a file, either using the column names, position numbers or a callable.
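For example (a sketch with made-up data):

```python
from io import StringIO

import pandas as pd

data = "a,b,c,d\n1,2,3,foo\n5,6,7,bar"

# Select columns by name...
by_name = pd.read_csv(StringIO(data), usecols=["b", "d"])
# ...or by position.
by_pos = pd.read_csv(StringIO(data), usecols=[0, 2])
print(list(by_name.columns), list(by_pos.columns))
```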

The usecols argument can also be used to specify which columns not to use in the final result.

In this case, the callable is specifying that we exclude the “a” and “c” columns from the output
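Such a callable can be sketched as follows (sample data made up for demonstration):

```python
from io import StringIO

import pandas as pd

data = "a,b,c,d\n1,2,3,foo\n5,6,7,bar"

# The callable returns False for "a" and "c", so only "b" and "d" are kept.
df = pd.read_csv(StringIO(data), usecols=lambda x: x not in ["a", "c"])
print(list(df.columns))
```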

Comments and empty lines

Ignoring line comments and empty lines

If the comment parameter is specified, then completely commented lines will be ignored. By default, completely blank lines will be ignored as well.
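A small sketch of this (the sample string is illustrative):

```python
from io import StringIO

import pandas as pd

# Both the fully commented line and the blank lines are ignored.
data = "\na,b,c\n\n# commented line\n1,2,3\n\n4,5,6"
df = pd.read_csv(StringIO(data), comment="#")
print(df.shape)
```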

If skip_blank_lines=False, then read_csv will not ignore blank lines.

Warning

The presence of ignored lines might create ambiguities involving line numbers; the parameter header uses row numbers (ignoring commented/empty lines), while skiprows uses line numbers (including commented/empty lines)
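To illustrate the difference (data invented for illustration), both calls below end up with the same header row, but they count lines differently to get there.

```python
from io import StringIO

import pandas as pd

data = "#comment\na,b,c\nA,B,C\n1,2,3"

# header counts rows *after* comment lines have been stripped...
df1 = pd.read_csv(StringIO(data), comment="#", header=1)

# ...while skiprows counts raw lines, comment lines included.
df2 = pd.read_csv(StringIO(data), comment="#", skiprows=2)
```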


If both header and skiprows are specified, header will be relative to the end of skiprows.
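For example (data invented for illustration), skiprows=4 first drops four raw lines, and header=1 is then counted from the end of the skipped block.

```python
from io import StringIO

import pandas as pd

data = (
    "# empty\n# second empty line\n# third empty line\n"
    "X,Y,Z\n1,2,3\nA,B,C\n1,2.,4.\n5.,NaN,10.0\n"
)

# After skipping 4 lines, the remaining rows are 1,2,3 / A,B,C / ...;
# header=1 selects "A,B,C" as the header row.
df = pd.read_csv(StringIO(data), comment="#", skiprows=4, header=1)
```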


Comments

Sometimes comments or meta data may be included in a file


By default, the parser includes the comments in the output


We can suppress the comments using the comment keyword.
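A small sketch (data invented for illustration): everything after the comment character on each line is dropped before parsing, so the trailing notes never reach the DataFrame.

```python
from io import StringIO

import pandas as pd

data = (
    "ID,level,category\n"
    "Patient1,123000,x # really unpleasant\n"
    "Patient2,23000,y # wouldn't take his medicine\n"
    "Patient3,1234018,z # awesome"
)

# Text from "#" to end of line is stripped, leaving clean fields.
df = pd.read_csv(StringIO(data), comment="#")
```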


Dealing with Unicode data

The encoding argument should be used for encoded unicode data, which will result in byte strings being decoded to unicode in the result.
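A minimal sketch (words invented for illustration): bytes encoded as Latin-1 decode correctly only when the encoding is passed through.

```python
from io import BytesIO

import pandas as pd

# Without encoding="latin-1", the non-ASCII characters would be
# decoded incorrectly (or fail outright).
raw = "word,length\nTräumen,7\nGrüße,5".encode("latin-1")
df = pd.read_csv(BytesIO(raw), encoding="latin-1")
```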


Some formats which encode all characters as multiple bytes, such as UTF-16, won't parse correctly at all without specifying the encoding.

Index columns and trailing delimiters

If a file has one more column of data than the number of column names, the first column will be used as the DataFrame's row names.
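For instance (data invented for illustration), three column names against four fields per row makes the first field the index.

```python
from io import StringIO

import pandas as pd

# Header has 3 names, data rows have 4 fields: the extra first field
# becomes the row index.
data = "a,b,c\n4,apple,bat,5.7\n8,orange,cow,10"
df = pd.read_csv(StringIO(data))
```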



Ordinarily, you can achieve this behavior using the index_col option.

There are some exception cases when a file has been prepared with delimiters at the end of each data line, confusing the parser. To explicitly disable the index column inference and discard the last column, pass index_col=False.
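A short sketch (data invented for illustration): each data row ends with a trailing delimiter, and index_col=False keeps "a" as an ordinary column while discarding the empty last field.

```python
from io import StringIO

import pandas as pd

data = "a,b,c\n4,apple,bat,\n8,orange,cow,"

# index_col=False disables index inference despite the extra field.
df = pd.read_csv(StringIO(data), index_col=False)
```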


If a subset of data is being parsed using the usecols option, the index_col specification is based on that subset, not the original data.
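A sketch of that interaction (data invented for illustration; my reading of the subset rule): index_col=0 refers to the first column of the usecols subset, here "b", not to column "a" of the original file.

```python
from io import StringIO

import pandas as pd

data = "a,b,c\n1,apple,x\n2,orange,y"

# The subset is ["b", "c"]; index_col=0 selects "b" as the index.
df = pd.read_csv(StringIO(data), usecols=["b", "c"], index_col=0)
```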


Date Handling

Specifying date columns

To better facilitate working with datetime data, read_csv() uses the keyword arguments parse_dates and date_parser to allow users to specify a variety of columns and date/time formats to turn the input text data into datetime objects.

The simplest case is to just pass in parse_dates=True.
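A minimal sketch (data invented for illustration): with an index column and parse_dates=True, the index is parsed as dates.

```python
from io import StringIO

import pandas as pd

data = "date,value\n2009-01-01,1\n2009-01-02,2"

# parse_dates=True parses the index column into datetime64 values.
df = pd.read_csv(StringIO(data), index_col=0, parse_dates=True)
```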


It is often the case that we may want to store date and time data separately, or store various date fields separately. The parse_dates keyword can be used to specify a combination of columns to parse the dates and/or times from.

You can specify a list of column lists to parse_dates; the resulting date columns will be prepended to the output (so as to not affect the existing column order) and the new column names will be the concatenation of the component column names.
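A sketch of combining columns (data invented for illustration; note this nested-list form is deprecated in recent pandas releases in favor of combining with to_datetime after reading): the three component columns fuse into one prepended datetime column named "year_month_day".

```python
from io import StringIO

import pandas as pd

data = "year,month,day,value\n2009,1,1,10\n2009,1,2,20"

# The nested list combines year/month/day into one datetime column,
# prepended ahead of the remaining columns.
df = pd.read_csv(StringIO(data), parse_dates=[["year", "month", "day"]])
```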


By default the parser removes the component date columns, but you can choose to retain them via the keep_date_col keyword.


Note that if you wish to combine multiple columns into a single date column, a nested list must be used. In other words, parse_dates=[1, 2] indicates that the second and third columns should each be parsed as separate date columns, while parse_dates=[[1, 2]] means the two columns should be parsed into a single column.

You can also use a dict to specify custom name columns
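A short sketch of the dict form (data and the column name "when" invented for illustration; deprecated in recent pandas in favor of post-read to_datetime): the dict key names the combined column.

```python
from io import StringIO

import pandas as pd

data = "year,month,day,value\n2009,1,1,10\n2009,1,2,20"

# The dict key "when" becomes the name of the combined date column.
df = pd.read_csv(StringIO(data), parse_dates={"when": ["year", "month", "day"]})
```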


It is important to remember that if multiple text columns are to be parsed into a single date column, then a new column is prepended to the data. The index_col specification is based off of this new set of columns rather than the original data columns.


Note

If a column or index contains an unparsable date, the entire column or index will be returned unaltered as an object data type. For non-standard datetime parsing, use pd.to_datetime() after pd.read_csv.

Note

read_csv has a fast_path for parsing datetime strings in iso8601 format, e.g. “2000-01-01T00:01:02+00:00” and similar variations. If you can arrange for your data to store datetimes in this format, load times will be significantly faster, ~20x has been observed.

Date parsing functions

Finally, the parser allows you to specify a custom date_parser function to take full advantage of the flexibility of the date parsing API.


pandas will try to call the date_parser function in three different ways. If an exception is raised, the next one is tried

  1. date_parser is first called with one or more arrays as arguments, as defined using parse_dates (e.g., date_parser(['2013', '2013'], ['1', '2']))

  2. If #1 fails, date_parser is called with all the columns concatenated row-wise into a single array (e.g., date_parser(['2013 1', '2013 2']))

  3. If #2 fails, date_parser is called once for every row with one or more string arguments from the columns indicated with parse_dates (e.g., date_parser('2013', '1') for the first row, date_parser('2013', '2') for the second, etc.)

Note that performance-wise, you should try these methods of parsing dates in order

  1. Try to infer the format using infer_datetime_format=True (see section below)

  2. If you know the format, use pd.to_datetime(): date_parser=lambda x: pd.to_datetime(x, format=...)

  3. If you have a really non-standard format, use a custom date_parser function. For optimal performance, this should be vectorized, i.e., it should accept arrays as arguments
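A vectorized custom parser might look like this (data and the "YYYY|MM|DD" format invented for illustration; note date_parser is deprecated in pandas 2.0 in favor of date_format, though it still works with a warning):

```python
from io import StringIO

import pandas as pd

data = "date,value\n2009|01|01,1\n2009|01|02,2"

# The parser receives a whole array of strings and hands it to
# pd.to_datetime with an explicit format, so it stays vectorized.
df = pd.read_csv(
    StringIO(data),
    parse_dates=["date"],
    date_parser=lambda arr: pd.to_datetime(arr, format="%Y|%m|%d"),
)
```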

Parsing a CSV with mixed timezones

pandas cannot natively represent a column or index with mixed timezones. If your CSV file contains columns with a mixture of timezones, the default result will be an object-dtype column with strings, even with parse_dates.


To parse the mixed-timezone values as a datetime column, pass a partially-applied to_datetime() with utc=True as the date_parser.
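A sketch of that approach (offsets invented for illustration; date_parser is deprecated in pandas 2.0 but still functional): partially applying utc=True converts every offset to UTC, yielding one tz-aware dtype instead of object-dtype strings.

```python
from functools import partial
from io import StringIO

import pandas as pd

content = "a\n2000-01-01T00:00:00+05:00\n2000-01-01T00:00:00+06:00"

# utc=True normalizes both offsets to UTC, giving a single dtype.
df = pd.read_csv(
    StringIO(content),
    parse_dates=["a"],
    date_parser=partial(pd.to_datetime, utc=True),
)
```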


Inferring datetime format

If you have parse_dates enabled for some or all of your columns, and your datetime strings are all formatted the same way, you may get a large speed up by setting infer_datetime_format=True. If set, pandas will attempt to guess the format of your datetime strings, and then use a faster means of parsing the strings. 5-10x parsing speeds have been observed. pandas will fallback to the usual parsing if either the format cannot be guessed or the format that was guessed cannot properly parse the entire column of strings. So in general, infer_datetime_format should not have any negative consequences if enabled

Here are some examples of datetime strings that can be guessed (all representing December 30th, 2011 at 00:00:00)

  • “20111230”

  • “2011/12/30”

  • “20111230 00:00:00”

  • “12/30/2011 00:00:00”

  • “30/Dec/2011 00:00:00”

  • “30/December/2011 00:00:00”

Note that infer_datetime_format is sensitive to dayfirst. With dayfirst=True, it will guess “01/12/2011” to be December 1st. With dayfirst=False (default) it will guess “01/12/2011” to be January 12th


International date formats

While US date formats tend to be MM/DD/YYYY, many international formats use DD/MM/YYYY instead. For convenience, a dayfirst keyword is provided.
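A quick sketch of the ambiguity (data invented for illustration): the same string "01/12/2011" parses to January 12th by default and to December 1st with dayfirst=True.

```python
from io import StringIO

import pandas as pd

data = "date,value\n01/12/2011,1"

# Default (month first): January 12th.
us = pd.read_csv(StringIO(data), parse_dates=["date"])

# dayfirst=True (day first): December 1st.
intl = pd.read_csv(StringIO(data), parse_dates=["date"], dayfirst=True)
```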


Writing CSVs to binary file objects

New in version 1.2.0

df.to_csv(..., mode="wb") allows writing a CSV to a file object opened in binary mode. In most cases, it is not necessary to specify mode, as pandas will auto-detect whether the file object is opened in text or binary mode.
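A minimal sketch (frame invented for illustration): writing to an in-memory binary buffer without specifying mode, relying on the auto-detection added in 1.2.0.

```python
from io import BytesIO

import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})

# The buffer is binary; pandas >= 1.2 detects this and encodes for us.
buf = BytesIO()
df.to_csv(buf, index=False)
text = buf.getvalue().decode("utf-8")
```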


Specifying method for floating-point conversion

The parameter float_precision can be specified in order to use a specific floating-point converter during parsing with the C engine. The options are the ordinary converter, the high-precision converter, and the round-trip converter (which is guaranteed to round-trip values after writing to a file).
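For example (value invented for illustration), the round-trip converter reproduces Python's own float parse exactly:

```python
from io import StringIO

import pandas as pd

val = "0.3066101993807095471566981359501369297504425048828125"
data = "a\n" + val

# The ordinary converter may differ in the last bits; "round_trip"
# is guaranteed to match float(val) exactly.
ordinary = pd.read_csv(StringIO(data))["a"][0]
roundtrip = pd.read_csv(StringIO(data), float_precision="round_trip")["a"][0]
```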


Thousand separators

For large numbers that have been written with a thousands separator, you can set the thousands keyword to a string of length 1 so that integers will be parsed correctly.

By default, numbers with a thousands separator will be parsed as strings


The thousands keyword allows integers to be parsed correctly.
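A short sketch of both behaviors (data invented for illustration): without the keyword the quoted numbers stay strings; with thousands="," they parse as integers.

```python
from io import StringIO

import pandas as pd

data = 'ID,level\nPatient1,"123,000"\nPatient2,"23,000"'

# Default: "123,000" is just a string, so the column is object dtype.
as_str = pd.read_csv(StringIO(data))

# thousands="," strips the separators and yields integers.
as_int = pd.read_csv(StringIO(data), thousands=",")
```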


NA values

To control which values are parsed as missing values (which are signified by NaN), specify a string in na_values. If you specify a list of strings, then all values in it are considered to be missing values. If you specify a number (a float, like 5.0, or an integer like 5), the corresponding equivalent values will also imply a missing value (in this case effectively [5.0, 5] are recognized as NaN).

To completely override the default values that are recognized as missing, specify keep_default_na=False.

The default NaN recognized values are ['-1.#IND', '1.#QNAN', '1.#IND', '-1.#QNAN', '#N/A N/A', '#N/A', 'N/A', 'n/a', 'NA', '<NA>', '#NA', 'NULL', 'null', 'NaN', '-NaN', 'nan', '-nan', 'None', ''].

Let us consider some examples

pd.read_csv("path_to_file.csv", na_values=[5])

In the example above 5 and 5.0 will be recognized as NaN, in addition to the defaults. A string will first be interpreted as a numerical 5, then as a NaN.

pd.read_csv("path_to_file.csv", keep_default_na=False, na_values=[""])

Above, only an empty field will be recognized as NaN.

pd.read_csv("path_to_file.csv", keep_default_na=False, na_values=["NA", "0"])

Above, both NA and 0 as strings are NaN.

pd.read_csv("path_to_file.csv", na_values=["Nope"])

The default values, in addition to the string "Nope", are recognized as NaN.
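A small runnable sketch of the na_values and keep_default_na interplay described above (the data is illustrative):

```python
from io import StringIO

import pandas as pd

data = "col\n5\nNA\nNope\n7"

# na_values=[5]: both 5 and "5.0" become NaN; the default set ("NA", ...)
# still applies on top
df1 = pd.read_csv(StringIO(data), na_values=[5])

# keep_default_na=False with na_values=["Nope"]: only "Nope" is treated as
# missing, so the literal string "NA" survives
df2 = pd.read_csv(StringIO(data), keep_default_na=False, na_values=["Nope"])
```
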

Infinity

inf like values will be parsed as np.inf (positive infinity), and -inf as -np.inf (negative infinity). These will ignore the case of the value, meaning Inf will also be parsed as np.inf.
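For instance, as a minimal sketch:

```python
from io import StringIO

import numpy as np
import pandas as pd

# "inf"-like strings parse to +/- infinity regardless of case
data = "col\ninf\n-inf\nInf"
df = pd.read_csv(StringIO(data))
```
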

Returning Series

Using the squeeze keyword, the parser will return output with a single column as a Series.

Deprecated since version 1.4.0: Users should append .squeeze("columns") to the DataFrame returned by read_csv instead.

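Since the keyword is deprecated, the replacement pattern looks like this (the data is illustrative):

```python
from io import StringIO

import pandas as pd

data = "level\n123000\n23000\n1234018"

# .squeeze("columns") turns a one-column DataFrame into a Series,
# replacing the deprecated squeeze=True keyword of read_csv
s = pd.read_csv(StringIO(data)).squeeze("columns")
```
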

Boolean values

The common values True, False, TRUE, and FALSE are all recognized as boolean. Occasionally you might want to recognize other values as being boolean. To do this, use the true_values and false_values options as follows:

data = "a,b,c\n1,Yes,2\n3,No,4"

pd.read_csv(StringIO(data), true_values=["Yes"], false_values=["No"])
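The same idea, runnable end to end:

```python
from io import StringIO

import pandas as pd

data = "a,b,c\n1,Yes,2\n3,No,4"

# Without the options, "Yes"/"No" remain plain strings
df_raw = pd.read_csv(StringIO(data))

# true_values/false_values map the listed strings onto booleans
df_bool = pd.read_csv(StringIO(data), true_values=["Yes"], false_values=["No"])
```
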

Handling “bad” lines

Some files may have malformed lines with too few fields or too many. Lines with too few fields will have NA values filled in the trailing fields. Lines with too many fields will raise an error by default

data = "a,b,c\n1,2,3\n4,5,6,7\n8,9,10"

pd.read_csv(StringIO(data))
# raises ParserError (expected 3 fields in line 3, saw 4)

You can elect to skip bad lines

pd.read_csv(StringIO(data), on_bad_lines="skip")

Or pass a callable function to handle the bad line if engine="python". The bad line will be a list of strings that was split by the sep:

external_list = []

def bad_lines_func(line):
    external_list.append(line)
    return line[-3:]

pd.read_csv(StringIO(data), on_bad_lines=bad_lines_func, engine="python")
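A runnable sketch of the callable handler (requires pandas 1.4 or newer; the recovery strategy of keeping the last three fields is just an illustration):

```python
from io import StringIO

import pandas as pd

data = "a,b,c\n1,2,3\n4,5,6,7\n8,9,10"

seen_bad_lines = []

def handle_bad_line(fields):
    # `fields` is the offending line already split on the separator;
    # record it and keep only the last three fields
    seen_bad_lines.append(fields)
    return fields[-3:]

df = pd.read_csv(StringIO(data), on_bad_lines=handle_bad_line, engine="python")
```
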

You can also use the usecols parameter to eliminate extraneous column data that appear in some lines but not others:

pd.read_csv(StringIO(data), usecols=[0, 1, 2])

In case you want to keep all data including the lines with too many fields, you can specify a sufficient number of names. This ensures that lines with not enough fields are filled with NaN:

pd.read_csv(StringIO(data), names=["a", "b", "c", "d"])

Dialect

The dialect keyword gives greater flexibility in specifying the file format. By default it uses the Excel dialect but you can specify either the dialect name or a csv.Dialect instance.

Suppose you had data with unenclosed quotes

data = 'label1,label2,label3\nindex1,"a,c,e\nindex2,b,d,f'

By default, read_csv uses the Excel dialect and treats the double quote as the quote character, which causes it to fail when it finds a newline before it finds the closing double quote.

We can get around this using dialect:

import csv

dia = csv.excel()
dia.quoting = csv.QUOTE_NONE

pd.read_csv(StringIO(data), dialect=dia)

All of the dialect options can be specified separately by keyword arguments

data = "a,b,c~1,2,3~4,5,6"

pd.read_csv(StringIO(data), lineterminator="~")

Another common dialect option is skipinitialspace, to skip any whitespace after a delimiter:

data = "a, b, c\n1, 2, 3\n4, 5, 6"

pd.read_csv(StringIO(data), skipinitialspace=True)
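Runnable, showing the effect on the parsed column names (a sketch):

```python
from io import StringIO

import pandas as pd

data = "a, b, c\n1, 2, 3\n4, 5, 6"

# Without skipinitialspace the leading blanks end up in the column names
df_messy = pd.read_csv(StringIO(data))

# skipinitialspace=True strips the whitespace that follows each delimiter
df_clean = pd.read_csv(StringIO(data), skipinitialspace=True)
```
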

The parsers make every attempt to “do the right thing” and not be fragile. Type inference is a pretty big deal. If a column can be coerced to integer dtype without altering the contents, the parser will do so. Any non-numeric columns will come through as object dtype as with the rest of pandas objects

Quoting and Escape Characters

Quotes (and other escape characters) in embedded fields can be handled in any number of ways. One way is to use backslashes; to properly parse this data, you should pass the escapechar option:

data = 'a,b\n"hello, \\"Bob\\", nice to see you",5'

pd.read_csv(StringIO(data), escapechar="\\")
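The same pattern as a runnable check (the field contents are illustrative):

```python
from io import StringIO

import pandas as pd

# Embedded double quotes are escaped with backslashes in the raw data
data = 'a,b\n"hello, \\"Bob\\", nice to see you",5'

df = pd.read_csv(StringIO(data), escapechar="\\")
```
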

Files with fixed width columns

While read_csv() reads delimited data, the read_fwf() function works with data files that have known and fixed column widths. The function parameters to read_fwf are largely the same as read_csv with two extra parameters, and a different usage of the delimiter parameter:

  • colspecs: A list of pairs (tuples) giving the extents of the fixed-width fields of each line as half-open intervals (i.e., [from, to[ ). The string value 'infer' can be used to instruct the parser to try detecting the column specifications from the first 100 rows of the data. The default behaviour, if not specified, is to infer.

  • widths: A list of field widths which can be used instead of 'colspecs' if the intervals are contiguous.

  • delimiter: Characters to consider as filler characters in the fixed-width file. Can be used to specify the filler character of the fields if it is not spaces (e.g., '~').

Consider a typical fixed-width data file:

id8141    360.242940   149.910199   11950.7
id1594    444.953632   166.985655   11788.4
id1849    364.136849   183.628767   11806.2
id1230    413.836124   184.375703   11916.8
id1948    502.953953   173.237159   12468.3

In order to parse this file into a DataFrame, we simply need to supply the column specifications to the read_fwf function along with the file name:

In [29]: col_1 = list(range(500000)) + ["a", "b"] + list(range(500000))

In [30]: df = pd.DataFrame({"col_1": col_1})

In [31]: df.to_csv("foo.csv")

In [32]: mixed_df = pd.read_csv("foo.csv")

In [33]: mixed_df["col_1"].apply(type).value_counts()
Out[33]: 
    737858
    262144
Name: col_1, dtype: int64

In [34]: mixed_df["col_1"].dtype
Out[34]: dtype('O')
1

Note how the parser automatically picks column names X.<column number> when the header=None argument is specified. Alternatively, you can supply just the column widths for contiguous columns:

# Widths are a list of integers
In [106]: widths = [6, 14, 13, 10]

In [107]: df = pd.read_fwf("bar.csv", widths=widths, header=None)

In [108]: df
Out[108]: 
        0           1            2        3
0  id8141  360.242940  149.910199  11950.7
1  id1594  444.953632  166.985655  11788.4
2  id1849  364.136849  183.628767  11806.2
3  id1230  413.836124  184.375703  11916.8
4  id1948  502.953953  173.237159  12468.3

The parser will take care of extra white spaces around the columns, so it's ok to have extra separation between the columns in the file.
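As a quick check of that behavior, here is a minimal, self-contained sketch (reading from an in-memory buffer rather than a file; the data is made up) showing that extra padding between fixed-width fields is ignored:

```python
import io

import pandas as pd

# Two fixed-width fields with deliberately generous padding between them
data = (
    "id8141      360.24\n"
    "id1594      444.95\n"
)

# Half-open column intervals; the run of spaces between fields is ignored
df = pd.read_fwf(io.StringIO(data), colspecs=[(0, 6), (12, 18)], header=None)

print(df.shape)         # (2, 2)
print(df[0].tolist())   # ['id8141', 'id1594']
```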

By default, read_fwf will try to infer the file's colspecs by using the first 100 rows of the file. It can do it only in cases when the columns are aligned and correctly separated by the provided delimiter (default delimiter is whitespace).

In [109]: df = pd.read_fwf("bar.csv", header=None, index_col=0)

In [110]: df
Out[110]: 
                 1            2        3
0                                       
id8141  360.242940   149.910199  11950.7
id1594  444.953632   166.985655  11788.4
id1849  364.136849   183.628767  11806.2
id1230  413.836124   184.375703  11916.8
id1948  502.953953   173.237159  12468.3

read_fwf supports the dtype parameter for specifying the types of parsed columns to be different from the inferred type.

In [111]: pd.read_fwf("bar.csv", header=None, index_col=0).dtypes
Out[111]: 
1    float64
2    float64
3    float64
dtype: object

In [112]: pd.read_fwf("bar.csv", header=None, dtype={2: "object"}).dtypes
Out[112]: 
0     object
1    float64
2     object
3    float64
dtype: object
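The same dtype override can be exercised without touching the file system; the snippet below is an illustrative sketch (not the bar.csv example) that keeps one parsed column as strings:

```python
import io

import pandas as pd

data = "id01  1.5\nid02  2.5\n"

# Inferred dtypes: first field object, second field float64
df = pd.read_fwf(io.StringIO(data), widths=[4, 6], header=None)

# Force the second parsed column to stay as strings instead
df2 = pd.read_fwf(io.StringIO(data), widths=[4, 6], header=None, dtype={1: "object"})

print(df.dtypes[1])    # float64
print(df2.dtypes[1])   # object
```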

Indexes

Files with an “implicit” index column

Consider a file with one less entry in the header than the number of data columns:

In [113]: data = "A,B,C\n20090101,a,1,2\n20090102,b,3,4\n20090103,c,4,5"

In [114]: print(data)
A,B,C
20090101,a,1,2
20090102,b,3,4
20090103,c,4,5

In this special case, read_csv assumes that the first column is to be used as the index of the DataFrame:

In [115]: pd.read_csv(StringIO(data))
Out[115]: 
          A  B  C
20090101  a  1  2
20090102  b  3  4
20090103  c  4  5

Note that the dates weren't automatically parsed. In that case you would need to do as before:

In [116]: df = pd.read_csv(StringIO(data), parse_dates=True)

In [117]: df
Out[117]: 
            A  B  C
2009-01-01  a  1  2
2009-01-02  b  3  4
2009-01-03  c  4  5
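Putting the implicit index and the date parsing together, a self-contained sketch:

```python
from io import StringIO

import pandas as pd

# One fewer header entry than data fields: the first column becomes the index
data = "A,B,C\n20090101,a,1,2\n20090102,b,3,4\n"

df = pd.read_csv(StringIO(data))
print(df.index.tolist())   # [20090101, 20090102] -- plain integers

# parse_dates=True asks for the index to be parsed as dates
df2 = pd.read_csv(StringIO(data), parse_dates=True)
print(df2.index[0])
```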

Reading an index with a MultiIndex

Suppose you have data indexed by two columns:

In [118]: data = 'year,indiv,zit,xit\n1977,"A",1.2,.6\n1977,"B",1.5,.5'

In [119]: print(data)
year,indiv,zit,xit
1977,"A",1.2,.6
1977,"B",1.5,.5

The index_col argument to read_csv can take a list of column numbers to turn multiple columns into a MultiIndex for the index of the returned object:

In [120]: df = pd.read_csv(StringIO(data), index_col=[0, 1])

In [121]: df
Out[121]: 
            zit  xit
year indiv          
1977 A      1.2  0.6
     B      1.5  0.5

Reading columns with a MultiIndex

By specifying a list of row locations for the header argument, you can read in a MultiIndex for the columns. Specifying non-consecutive rows will skip the intervening rows.

In [122]: mi_idx = pd.MultiIndex.from_arrays([[1, 2, 3, 4], list("abcd")], names=list("ab"))

In [123]: mi_col = pd.MultiIndex.from_arrays([[1, 2], list("ab")], names=list("cd"))

In [124]: df = pd.DataFrame(np.ones((4, 2)), index=mi_idx, columns=mi_col)

In [125]: df.to_csv("mi.csv")

In [126]: pd.read_csv("mi.csv", header=[0, 1, 2, 3], index_col=[0, 1])

read_csv is also able to interpret a more common format of multi-columns indices:

In [127]: data = ",a,a,a,b,c,c\n,q,r,s,t,u,v\none,1,2,3,4,5,6\ntwo,7,8,9,10,11,12"

In [128]: pd.read_csv(StringIO(data), header=[0, 1], index_col=0)
Out[128]: 
     a         b   c    
     q  r  s   t   u   v
one  1  2  3   4   5   6
two  7  8  9  10  11  12

Note

If an index_col is not specified (e.g. you don't have an index, or wrote it with df.to_csv(..., index=False)), then any names on the columns index will be lost.

Automatically “sniffing” the delimiter

read_csv is capable of inferring delimited (not necessarily comma-separated) files, as pandas uses the csv.Sniffer class of the csv module. For this, you have to specify sep=None:

In [129]: df = pd.DataFrame(np.random.randn(10, 4))

In [130]: df.to_csv("tmp2.csv", sep=":", index=False)

In [131]: pd.read_csv("tmp2.csv", sep=None, engine="python")
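A runnable sketch of the sniffer, using an in-memory colon-delimited table instead of a file on disk:

```python
from io import StringIO

import pandas as pd

# Colon-delimited text; we deliberately do not tell pandas the separator
data = "a:b:c\n1:2:3\n4:5:6\n"

# sep=None hands delimiter detection to csv.Sniffer (python engine only)
df = pd.read_csv(StringIO(data), sep=None, engine="python")

print(list(df.columns))   # ['a', 'b', 'c']
print(df.shape)           # (2, 3)
```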

Reading multiple files to create a single DataFrame

It’s best to use concat() to combine multiple files. See the cookbook for an example.
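As a sketch of that pattern (the folder and file names here are invented for the example), glob the files, read each one, and concatenate:

```python
import glob
import os
import tempfile

import pandas as pd

# Write two small CSV files with the same schema into a temp folder
tmpdir = tempfile.mkdtemp()
for i in range(2):
    pd.DataFrame({"x": [i, i + 1]}).to_csv(
        os.path.join(tmpdir, f"part{i}.csv"), index=False
    )

# Read every matching file and stack the pieces into one DataFrame
frames = [pd.read_csv(f) for f in sorted(glob.glob(os.path.join(tmpdir, "*.csv")))]
combined = pd.concat(frames, ignore_index=True)

print(combined["x"].tolist())   # [0, 1, 1, 2]
```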

Iterating through files chunk by chunk

Suppose you wish to iterate through a (potentially very large) file lazily rather than reading the entire file into memory, such as the following:

In [132]: df = pd.DataFrame(np.random.randn(10, 4))

In [133]: df.to_csv("tmp.csv", index=False)

In [134]: table = pd.read_csv("tmp.csv")

By specifying a chunksize to read_csv, the return value will be an iterable object of type TextFileReader:

In [135]: with pd.read_csv("tmp.csv", chunksize=4) as reader:
   .....:     for chunk in reader:
   .....:         print(chunk)

Changed in version 1.2: read_csv/json/sas return a context-manager when iterating through a file.
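To see the chunked iteration end-to-end, a small sketch over an in-memory buffer (the chunk size and data are arbitrary):

```python
from io import StringIO

import pandas as pd

data = "x\n" + "\n".join(str(i) for i in range(10))

# Each iteration over the TextFileReader yields a DataFrame of up to 4 rows
sizes = []
total = 0
with pd.read_csv(StringIO(data), chunksize=4) as reader:
    for chunk in reader:
        sizes.append(len(chunk))
        total += int(chunk["x"].sum())

print(sizes)   # [4, 4, 2]
print(total)   # 45
```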

Specifying iterator=True will also return the TextFileReader object:

In [136]: with pd.read_csv("tmp.csv", iterator=True) as reader:
   .....:     print(reader.get_chunk(5))

Specifying the parser engine

Pandas currently supports three engines, the C engine, the python engine, and an experimental pyarrow engine (requires the pyarrow package). In general, the pyarrow engine is fastest on larger workloads and is equivalent in speed to the C engine on most other workloads. The python engine tends to be slower than the pyarrow and C engines on most workloads. However, the pyarrow engine is much less robust than the C engine, which lacks a few features compared to the Python engine.

Where possible, pandas uses the C parser (specified as engine="c"), but it may fall back to Python if C-unsupported options are specified.

Currently, options unsupported by the C and pyarrow engines include:

  • sep other than a single character (e.g. regex separators)

  • skipfooter

  • sep=None with delim_whitespace=False

Specifying any of the above options will produce a ParserWarning unless the python engine is selected explicitly using engine="python".
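For example, a multi-character regex separator is one such option; selecting the python engine explicitly avoids the fallback warning (the data here is made up):

```python
from io import StringIO

import pandas as pd

# Commas surrounded by stray whitespace -- needs a regex separator
data = "a, b ,c\n1 , 2,3\n"

# A regex sep is unsupported by the C engine; ask for the python engine
df = pd.read_csv(StringIO(data), sep=r"\s*,\s*", engine="python")

print(list(df.columns))      # ['a', 'b', 'c']
print(df.iloc[0].tolist())   # [1, 2, 3]
```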

Options that are unsupported by the pyarrow engine which are not covered by the list above include:

  • In [29]: col_1 = list(range(500000)) + ["a", "b"] + list(range(500000))
    
    In [30]: df = pd.DataFrame({"col_1": col_1})
    
    In [31]: df.to_csv("foo.csv")
    
    In [32]: mixed_df = pd.read_csv("foo.csv")
    
    In [33]: mixed_df["col_1"].apply(type).value_counts()
    Out[33]: 
        737858
        262144
    Name: col_1, dtype: int64
    
    In [34]: mixed_df["col_1"].dtype
    Out[34]: dtype('O')
    
    06

  • In [13]: import numpy as np
    
    In [14]: data = "a,b,c,d\n1,2,3,4\n5,6,7,8\n9,10,11"
    
    In [15]: print(data)
    a,b,c,d
    1,2,3,4
    5,6,7,8
    9,10,11
    
    In [16]: df = pd.read_csv(StringIO(data), dtype=object)
    
    In [17]: df
    Out[17]: 
       a   b   c    d
    0  1   2   3    4
    1  5   6   7    8
    2  9  10  11  NaN
    
    In [18]: df["a"][0]
    Out[18]: '1'
    
    In [19]: df = pd.read_csv(StringIO(data), dtype={"b": object, "c": np.float64, "d": "Int64"})
    
    In [20]: df.dtypes
    Out[20]: 
    a      int64
    b     object
    c    float64
    d      Int64
    dtype: object
    
    90

  • In [25]: df2 = pd.read_csv(StringIO(data))
    
    In [26]: df2["col_1"] = pd.to_numeric(df2["col_1"], errors="coerce")
    
    In [27]: df2
    Out[27]: 
       col_1
    0   1.00
    1   2.00
    2    NaN
    3   4.22
    
    In [28]: df2["col_1"].apply(type).value_counts()
    Out[28]: 
        4
    Name: col_1, dtype: int64
    
    52

  • In [35]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
    
    In [36]: pd.read_csv(StringIO(data))
    Out[36]: 
      col1 col2  col3
    0    a    b     1
    1    a    b     2
    2    c    d     3
    
    In [37]: pd.read_csv(StringIO(data)).dtypes
    Out[37]: 
    col1    object
    col2    object
    col3     int64
    dtype: object
    
    In [38]: pd.read_csv(StringIO(data), dtype="category").dtypes
    Out[38]: 
    col1    category
    col2    category
    col3    category
    dtype: object
    
    08

  • In [29]: col_1 = list(range(500000)) + ["a", "b"] + list(range(500000))
    
    In [30]: df = pd.DataFrame({"col_1": col_1})
    
    In [31]: df.to_csv("foo.csv")
    
    In [32]: mixed_df = pd.read_csv("foo.csv")
    
    In [33]: mixed_df["col_1"].apply(type).value_counts()
    Out[33]: 
        737858
        262144
    Name: col_1, dtype: int64
    
    In [34]: mixed_df["col_1"].dtype
    Out[34]: dtype('O')
    
    07

  • In [35]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
    
    In [36]: pd.read_csv(StringIO(data))
    Out[36]: 
      col1 col2  col3
    0    a    b     1
    1    a    b     2
    2    c    d     3
    
    In [37]: pd.read_csv(StringIO(data)).dtypes
    Out[37]: 
    col1    object
    col2    object
    col3     int64
    dtype: object
    
    In [38]: pd.read_csv(StringIO(data), dtype="category").dtypes
    Out[38]: 
    col1    category
    col2    category
    col3    category
    dtype: object
    
    10

  • In [29]: col_1 = list(range(500000)) + ["a", "b"] + list(range(500000))
    
    In [30]: df = pd.DataFrame({"col_1": col_1})
    
    In [31]: df.to_csv("foo.csv")
    
    In [32]: mixed_df = pd.read_csv("foo.csv")
    
    In [33]: mixed_df["col_1"].apply(type).value_counts()
    Out[33]: 
        737858
        262144
    Name: col_1, dtype: int64
    
    In [34]: mixed_df["col_1"].dtype
    Out[34]: dtype('O')
    
    52

  • In [35]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
    
    In [36]: pd.read_csv(StringIO(data))
    Out[36]: 
      col1 col2  col3
    0    a    b     1
    1    a    b     2
    2    c    d     3
    
    In [37]: pd.read_csv(StringIO(data)).dtypes
    Out[37]: 
    col1    object
    col2    object
    col3     int64
    dtype: object
    
    In [38]: pd.read_csv(StringIO(data), dtype="category").dtypes
    Out[38]: 
    col1    category
    col2    category
    col3    category
    dtype: object
    
    12

  • In [35]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
    
    In [36]: pd.read_csv(StringIO(data))
    Out[36]: 
      col1 col2  col3
    0    a    b     1
    1    a    b     2
    2    c    d     3
    
    In [37]: pd.read_csv(StringIO(data)).dtypes
    Out[37]: 
    col1    object
    col2    object
    col3     int64
    dtype: object
    
    In [38]: pd.read_csv(StringIO(data), dtype="category").dtypes
    Out[38]: 
    col1    category
    col2    category
    col3    category
    dtype: object
    
    13

  • In [25]: df2 = pd.read_csv(StringIO(data))
    
    In [26]: df2["col_1"] = pd.to_numeric(df2["col_1"], errors="coerce")
    
    In [27]: df2
    Out[27]: 
       col_1
    0   1.00
    1   2.00
    2    NaN
    3   4.22
    
    In [28]: df2["col_1"].apply(type).value_counts()
    Out[28]: 
        4
    Name: col_1, dtype: int64
    
    03

  • In [35]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
    
    In [36]: pd.read_csv(StringIO(data))
    Out[36]: 
      col1 col2  col3
    0    a    b     1
    1    a    b     2
    2    c    d     3
    
    In [37]: pd.read_csv(StringIO(data)).dtypes
    Out[37]: 
    col1    object
    col2    object
    col3     int64
    dtype: object
    
    In [38]: pd.read_csv(StringIO(data), dtype="category").dtypes
    Out[38]: 
    col1    category
    col2    category
    col3    category
    dtype: object
    
    15

  • In [21]: data = "col_1\n1\n2\n'A'\n4.22"
    
    In [22]: df = pd.read_csv(StringIO(data), converters={"col_1": str})
    
    In [23]: df
    Out[23]: 
      col_1
    0     1
    1     2
    2   'A'
    3  4.22
    
    In [24]: df["col_1"].apply(type).value_counts()
    Out[24]: 
        4
    Name: col_1, dtype: int64
    
    76

  • In [35]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
    
    In [36]: pd.read_csv(StringIO(data))
    Out[36]: 
      col1 col2  col3
    0    a    b     1
    1    a    b     2
    2    c    d     3
    
    In [37]: pd.read_csv(StringIO(data)).dtypes
    Out[37]: 
    col1    object
    col2    object
    col3     int64
    dtype: object
    
    In [38]: pd.read_csv(StringIO(data), dtype="category").dtypes
    Out[38]: 
    col1    category
    col2    category
    col3    category
    dtype: object
    
    17

  • In [25]: df2 = pd.read_csv(StringIO(data))
    
    In [26]: df2["col_1"] = pd.to_numeric(df2["col_1"], errors="coerce")
    
    In [27]: df2
    Out[27]: 
       col_1
    0   1.00
    1   2.00
    2    NaN
    3   4.22
    
    In [28]: df2["col_1"].apply(type).value_counts()
    Out[28]: 
        4
    Name: col_1, dtype: int64
    
    11

  • In [35]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
    
    In [36]: pd.read_csv(StringIO(data))
    Out[36]: 
      col1 col2  col3
    0    a    b     1
    1    a    b     2
    2    c    d     3
    
    In [37]: pd.read_csv(StringIO(data)).dtypes
    Out[37]: 
    col1    object
    col2    object
    col3     int64
    dtype: object
    
    In [38]: pd.read_csv(StringIO(data), dtype="category").dtypes
    Out[38]: 
    col1    category
    col2    category
    col3    category
    dtype: object
    
    19

  • In [13]: import numpy as np
    
    In [14]: data = "a,b,c,d\n1,2,3,4\n5,6,7,8\n9,10,11"
    
    In [15]: print(data)
    a,b,c,d
    1,2,3,4
    5,6,7,8
    9,10,11
    
    In [16]: df = pd.read_csv(StringIO(data), dtype=object)
    
    In [17]: df
    Out[17]: 
       a   b   c    d
    0  1   2   3    4
    1  5   6   7    8
    2  9  10  11  NaN
    
    In [18]: df["a"][0]
    Out[18]: '1'
    
    In [19]: df = pd.read_csv(StringIO(data), dtype={"b": object, "c": np.float64, "d": "Int64"})
    
    In [20]: df.dtypes
    Out[20]: 
    a      int64
    b     object
    c    float64
    d      Int64
    dtype: object
    
    91

  • In [29]: col_1 = list(range(500000)) + ["a", "b"] + list(range(500000))
    
    In [30]: df = pd.DataFrame({"col_1": col_1})
    
    In [31]: df.to_csv("foo.csv")
    
    In [32]: mixed_df = pd.read_csv("foo.csv")
    
    In [33]: mixed_df["col_1"].apply(type).value_counts()
    Out[33]: 
    <class 'int'>    737858
    <class 'str'>    262144
    Name: col_1, dtype: int64
    
    In [34]: mixed_df["col_1"].dtype
    Out[34]: dtype('O')
    

  • In [21]: data = "col_1\n1\n2\n'A'\n4.22"
    
    In [22]: df = pd.read_csv(StringIO(data), converters={"col_1": str})
    
    In [23]: df
    Out[23]: 
      col_1
    0     1
    1     2
    2   'A'
    3  4.22
    
    In [24]: df["col_1"].apply(type).value_counts()
    Out[24]: 
    <class 'str'>    4
    Name: col_1, dtype: int64
    

Specifying these options with engine="pyarrow" will raise a ValueError.

Reading/writing remote files

You can pass in a URL to read or write remote files to many of pandas’ IO functions - the following example shows reading a CSV file

df = pd.read_csv("https://download.bls.gov/pub/time.series/cu/cu.item", sep="\t")

New in version 1.3.0

A custom header can be sent alongside HTTP(s) requests by passing a dictionary of header key value mappings to the storage_options keyword argument as shown below

headers = {"User-Agent": "pandas"}
df = pd.read_csv(
    "https://download.bls.gov/pub/time.series/cu/cu.item",
    sep="\t",
    storage_options=headers,
)

All URLs which are not local files or HTTP(s) are handled by fsspec, if installed, and its various filesystem implementations (including Amazon S3, Google Cloud, SSH, FTP, webHDFS…). Some of these implementations will require additional packages to be installed, for example S3 URLs require the s3fs library

df = pd.read_json("s3://pandas-test/adatafile.json")

When dealing with remote storage systems, you might need extra configuration with environment variables or config files in special locations. For example, to access data in your S3 bucket, you will need to define credentials in one of the several ways listed in the S3Fs documentation. The same is true for several of the storage backends, and you should follow the links in the fsspec documentation for implementations built into fsspec and for those not included in the main fsspec distribution.

You can also pass parameters directly to the backend driver. For example, if you do not have S3 credentials, you can still access public data by specifying an anonymous connection, such as

New in version 1.2.0

pd.read_csv(
    "s3://ncei-wcsd-archive/data/processed/SH1305/18kHz/SaKe2013"
    "-D20130523-T080854_to_SaKe2013-D20130523-T085643.csv",
    storage_options={"anon": True},
)

fsspec also allows complex URLs, for accessing data in compressed archives, local caching of files, and more. To locally cache the above example, you would modify the call to

pd.read_csv(
    "simplecache::s3://ncei-wcsd-archive/data/processed/SH1305/18kHz/"
    "SaKe2013-D20130523-T080854_to_SaKe2013-D20130523-T085643.csv",
    storage_options={"s3": {"anon": True}},
)

where we specify that the “anon” parameter is meant for the “s3” part of the implementation, not to the caching implementation. Note that this caches to a temporary directory for the duration of the session only, but you can also specify a permanent store

Writing out data

Writing to CSV format

The Series and DataFrame objects have an instance method to_csv which allows storing the contents of the object as a comma-separated-values file. The function takes a number of arguments. Only the first is required

  • path_or_buf: A string path to the file to write or a file object. If a file object it must be opened with newline=''

  • sep: Field delimiter for the output file (default ",")

  • na_rep: A string representation of a missing value (default '')

  • float_format: Format string for floating point numbers

  • columns: Columns to write (default None)

  • header: Whether to write out the column names (default True)

  • index: whether to write row (index) names (default True)

  • index_label: Column label(s) for index column(s) if desired. If None (default), and header and index are True, then the index names are used. (A sequence should be given if the DataFrame uses MultiIndex.)

  • mode: Python write mode, default 'w'

  • encoding: a string representing the encoding to use if the contents are non-ASCII, for Python versions prior to 3

  • lineterminator: Character sequence denoting line end (default os.linesep)

  • quoting: Set quoting rules as in csv module (default csv.QUOTE_MINIMAL). Note that if you have set a float_format then floats are converted to strings and csv.QUOTE_NONNUMERIC will treat them as non-numeric

  • quotechar: Character used to quote fields (default '"')

  • doublequote: Control quoting of quotechar in fields (default True)

  • escapechar: Character used to escape sep and quotechar when appropriate (default None)

  • chunksize: Number of rows to write at a time

  • date_format: Format string for datetime objects
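A short sketch (with made-up data, not from the docs above) showing a few of these arguments together; when path_or_buf is omitted, to_csv returns the CSV text as a string instead of writing a file:

```python
import pandas as pd

# Hypothetical interview data; the missing value becomes the na_rep marker.
df = pd.DataFrame({"month": ["Jan", "Feb"], "first": [10, None], "second": [3, 4]})

# Tab-delimited output, custom missing-value marker, no row index column.
csv_text = df.to_csv(sep="\t", na_rep="NA", index=False)
print(csv_text)
```
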

Writing a formatted string

The DataFrame object has an instance method to_string which allows control over the string representation of the object. All arguments are optional

  • In [35]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
    
    In [36]: pd.read_csv(StringIO(data))
    Out[36]: 
      col1 col2  col3
    0    a    b     1
    1    a    b     2
    2    c    d     3
    
    In [37]: pd.read_csv(StringIO(data)).dtypes
    Out[37]: 
    col1    object
    col2    object
    col3     int64
    dtype: object
    
    In [38]: pd.read_csv(StringIO(data), dtype="category").dtypes
    Out[38]: 
    col1    category
    col2    category
    col3    category
    dtype: object
    
    63 default None, for example a StringIO object

  • In [35]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
    
    In [36]: pd.read_csv(StringIO(data))
    Out[36]: 
      col1 col2  col3
    0    a    b     1
    1    a    b     2
    2    c    d     3
    
    In [37]: pd.read_csv(StringIO(data)).dtypes
    Out[37]: 
    col1    object
    col2    object
    col3     int64
    dtype: object
    
    In [38]: pd.read_csv(StringIO(data), dtype="category").dtypes
    Out[38]: 
    col1    category
    col2    category
    col3    category
    dtype: object
    
    40 default None, which columns to write

  • In [35]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
    
    In [36]: pd.read_csv(StringIO(data))
    Out[36]: 
      col1 col2  col3
    0    a    b     1
    1    a    b     2
    2    c    d     3
    
    In [37]: pd.read_csv(StringIO(data)).dtypes
    Out[37]: 
    col1    object
    col2    object
    col3     int64
    dtype: object
    
    In [38]: pd.read_csv(StringIO(data), dtype="category").dtypes
    Out[38]: 
    col1    category
    col2    category
    col3    category
    dtype: object
    
    65 default None, minimum width of each column

  • In [35]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
    
    In [36]: pd.read_csv(StringIO(data))
    Out[36]: 
      col1 col2  col3
    0    a    b     1
    1    a    b     2
    2    c    d     3
    
    In [37]: pd.read_csv(StringIO(data)).dtypes
    Out[37]: 
    col1    object
    col2    object
    col3     int64
    dtype: object
    
    In [38]: pd.read_csv(StringIO(data), dtype="category").dtypes
    Out[38]: 
    col1    category
    col2    category
    col3    category
    dtype: object
    
    38 default
    In [13]: import numpy as np
    
    In [14]: data = "a,b,c,d\n1,2,3,4\n5,6,7,8\n9,10,11"
    
    In [15]: print(data)
    a,b,c,d
    1,2,3,4
    5,6,7,8
    9,10,11
    
    In [16]: df = pd.read_csv(StringIO(data), dtype=object)
    
    In [17]: df
    Out[17]: 
       a   b   c    d
    0  1   2   3    4
    1  5   6   7    8
    2  9  10  11  NaN
    
    In [18]: df["a"][0]
    Out[18]: '1'
    
    In [19]: df = pd.read_csv(StringIO(data), dtype={"b": object, "c": np.float64, "d": "Int64"})
    
    In [20]: df.dtypes
    Out[20]: 
    a      int64
    b     object
    c    float64
    d      Int64
    dtype: object
    
    46, representation of NA value

  • In [35]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
    
    In [36]: pd.read_csv(StringIO(data))
    Out[36]: 
      col1 col2  col3
    0    a    b     1
    1    a    b     2
    2    c    d     3
    
    In [37]: pd.read_csv(StringIO(data)).dtypes
    Out[37]: 
    col1    object
    col2    object
    col3     int64
    dtype: object
    
    In [38]: pd.read_csv(StringIO(data), dtype="category").dtypes
    Out[38]: 
    col1    category
    col2    category
    col3    category
    dtype: object
    
    68 default None, a dictionary (by column) of functions each of which takes a single argument and returns a formatted string

  • In [35]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
    
    In [36]: pd.read_csv(StringIO(data))
    Out[36]: 
      col1 col2  col3
    0    a    b     1
    1    a    b     2
    2    c    d     3
    
    In [37]: pd.read_csv(StringIO(data)).dtypes
    Out[37]: 
    col1    object
    col2    object
    col3     int64
    dtype: object
    
    In [38]: pd.read_csv(StringIO(data), dtype="category").dtypes
    Out[38]: 
    col1    category
    col2    category
    col3    category
    dtype: object
    
    39 default None, a function which takes a single (float) argument and returns a formatted string; to be applied to floats in the
    In [13]: import numpy as np
    
    In [14]: data = "a,b,c,d\n1,2,3,4\n5,6,7,8\n9,10,11"
    
    In [15]: print(data)
    a,b,c,d
    1,2,3,4
    5,6,7,8
    9,10,11
    
    In [16]: df = pd.read_csv(StringIO(data), dtype=object)
    
    In [17]: df
    Out[17]: 
       a   b   c    d
    0  1   2   3    4
    1  5   6   7    8
    2  9  10  11  NaN
    
    In [18]: df["a"][0]
    Out[18]: '1'
    
    In [19]: df = pd.read_csv(StringIO(data), dtype={"b": object, "c": np.float64, "d": "Int64"})
    
    In [20]: df.dtypes
    Out[20]: 
    a      int64
    b     object
    c    float64
    d      Int64
    dtype: object
    
    43

  • In [35]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
    
    In [36]: pd.read_csv(StringIO(data))
    Out[36]: 
      col1 col2  col3
    0    a    b     1
    1    a    b     2
    2    c    d     3
    
    In [37]: pd.read_csv(StringIO(data)).dtypes
    Out[37]: 
    col1    object
    col2    object
    col3     int64
    dtype: object
    
    In [38]: pd.read_csv(StringIO(data), dtype="category").dtypes
    Out[38]: 
    col1    category
    col2    category
    col3    category
    dtype: object
    
    71 default True, set to False for a
    In [13]: import numpy as np
    
    In [14]: data = "a,b,c,d\n1,2,3,4\n5,6,7,8\n9,10,11"
    
    In [15]: print(data)
    a,b,c,d
    1,2,3,4
    5,6,7,8
    9,10,11
    
    In [16]: df = pd.read_csv(StringIO(data), dtype=object)
    
    In [17]: df
    Out[17]: 
       a   b   c    d
    0  1   2   3    4
    1  5   6   7    8
    2  9  10  11  NaN
    
    In [18]: df["a"][0]
    Out[18]: '1'
    
    In [19]: df = pd.read_csv(StringIO(data), dtype={"b": object, "c": np.float64, "d": "Int64"})
    
    In [20]: df.dtypes
    Out[20]: 
    a      int64
    b     object
    c    float64
    d      Int64
    dtype: object
    
    43 with a hierarchical index to print every MultiIndex key at each row

  • In [35]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
    
    In [36]: pd.read_csv(StringIO(data))
    Out[36]: 
      col1 col2  col3
    0    a    b     1
    1    a    b     2
    2    c    d     3
    
    In [37]: pd.read_csv(StringIO(data)).dtypes
    Out[37]: 
    col1    object
    col2    object
    col3     int64
    dtype: object
    
    In [38]: pd.read_csv(StringIO(data), dtype="category").dtypes
    Out[38]: 
    col1    category
    col2    category
    col3    category
    dtype: object
    
    73 default True, will print the names of the indices

  • In [35]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"
    
    In [36]: pd.read_csv(StringIO(data))
    Out[36]: 
      col1 col2  col3
    0    a    b     1
    1    a    b     2
    2    c    d     3
    
    In [37]: pd.read_csv(StringIO(data)).dtypes
    Out[37]: 
    col1    object
    col2    object
    col3     int64
    dtype: object
    
    In [38]: pd.read_csv(StringIO(data), dtype="category").dtypes
    Out[38]: 
    col1    category
    col2    category
    col3    category
    dtype: object
    
    42 default True, will print the index (ie, row labels)

  • In [21]: data = "col_1\n1\n2\n'A'\n4.22"
    
    In [22]: df = pd.read_csv(StringIO(data), converters={"col_1": str})
    
    In [23]: df
    Out[23]: 
      col_1
    0     1
    1     2
    2   'A'
    3  4.22
    
    In [24]: df["col_1"].apply(type).value_counts()
    Out[24]: 
        4
    Name: col_1, dtype: int64
    
    84 default True, will print the column labels

  • justify : default 'left', will print column headers left- or right-justified
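A minimal sketch of a few of these options in action, using a made-up frame with a hierarchical index (the data and names here are illustrative, not from the original examples):

```python
import pandas as pd

# Toy frame with a two-level MultiIndex.
mi = pd.MultiIndex.from_product([["x", "y"], [1, 2]])
dfm = pd.DataFrame({"v": range(4)}, index=mi)

# sparsify=False repeats every MultiIndex key on each row
# instead of blanking repeated outer labels.
dense = dfm.to_string(sparsify=False)

# header=False / index=False suppress column and row labels.
bare = dfm.to_string(header=False, index=False)
```
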

The Series object also has a to_string method, but with only the buf, na_rep and float_format arguments. There is also a length argument which, if set to True, will additionally output the length of the Series.
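A quick sketch of Series.to_string with these arguments (toy data for illustration):

```python
import pandas as pd

s = pd.Series([1.25, 2.5, 3.75])

# float_format takes a callable applied to each float;
# length=True appends the length of the Series to the output.
out = s.to_string(float_format="{:.1f}".format, length=True)
```
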

JSON

Read and write JSON format files and strings.

Writing JSON

A Series or DataFrame can be converted to a valid JSON string. Use to_json with optional parameters:

  • path_or_buf : the pathname or buffer to write the output. This can be None, in which case a JSON string is returned

  • orient :

    Series:

      • default is index

      • allowed values are {split, records, index}

    DataFrame:

      • default is columns

      • allowed values are {split, records, index, columns, values, table}

    The format of the JSON string:

    split : dict like {index -> [index], columns -> [columns], data -> [values]}

    records : list like [{column -> value}, … , {column -> value}]

    index : dict like {index -> {column -> value}}

    columns : dict like {column -> {index -> value}}

    values : just the values array

    table : adhering to the JSON Table Schema

  • date_format : string, type of date conversion, 'epoch' for timestamp, 'iso' for ISO8601

  • double_precision : the number of decimal places to use when encoding floating point values, default 10

  • force_ascii : force encoded string to be ASCII, default True

  • date_unit : the time unit to encode to, governs timestamp and ISO8601 precision. One of 's', 'ms', 'us' or 'ns' for seconds, milliseconds, microseconds and nanoseconds respectively. Default 'ms'

  • default_handler : the handler to call if an object cannot otherwise be converted to a suitable format for JSON. Takes a single argument, which is the object to convert, and returns a serializable object

  • lines : if using the records orient, will write each record per line as JSON
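A minimal sketch of these parameters in action, with a toy frame standing in for the original examples:

```python
import json
import pandas as pd

df = pd.DataFrame({"A": [1, 2], "B": [3, 4]})

# path_or_buf defaults to None, so a JSON string is returned.
# The default orient for a DataFrame is "columns".
as_columns = df.to_json()

# lines=True pairs with orient="records": one JSON document per line.
as_lines = df.to_json(orient="records", lines=True)
records = [json.loads(line) for line in as_lines.splitlines()]
```
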

Note

NaN's, NaT's and None will be converted to null, and datetime objects will be converted based on the date_format and date_unit parameters.

Orient options

There are a number of different options for the format of the resulting JSON file / string. Consider the following DataFrame and Series.
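The original example outputs were lost in extraction; a small stand-in frame (hypothetical data) makes the shapes concrete when each serialization is parsed back:

```python
import json
import pandas as pd

# Stand-in for the docs' example DataFrame.
df = pd.DataFrame({"A": [1, 2], "B": [3, 4]}, index=["x", "y"])

# Serialize with each orient and parse the result back into Python objects.
parsed = {
    orient: json.loads(df.to_json(orient=orient))
    for orient in ["columns", "index", "records", "values", "split"]
}
```
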

Column oriented (the default for DataFrame) serializes the data as nested JSON objects with column labels acting as the primary index.

Index oriented (the default for Series) is similar to column oriented, but the index labels are now primary.

Record oriented serializes the data to a JSON array of column -> value records; index labels are not included. This is useful for passing DataFrame data to plotting libraries, for example the JavaScript library d3.js.

Value oriented is a bare-bones option which serializes to nested JSON arrays of values only; column and index labels are not included.

Split oriented serializes to a JSON object containing separate entries for values, index and columns. The name is also included for a Series.

Table oriented serializes to the JSON Table Schema, allowing for the preservation of metadata including but not limited to dtypes and index names.

Note

Any orient option that encodes to a JSON object will not preserve the ordering of index and column labels during round-trip serialization. If you wish to preserve label ordering, use the split option, as it uses ordered containers.

Date handling

to_json supports several date-output styles:

  • writing in ISO date format

  • writing in ISO date format, with microseconds

  • epoch timestamps, in seconds

  • writing to a file, with a date index and a date column
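The original date examples did not survive extraction; a minimal sketch with made-up dates shows the first three styles:

```python
import json
import pandas as pd

dfd = pd.DataFrame({"date": pd.to_datetime(["2013-01-01", "2013-01-02"])})

# ISO 8601 strings, millisecond precision by default.
iso = dfd.to_json(date_format="iso")

# ISO 8601 strings with microsecond precision.
iso_us = dfd.to_json(date_format="iso", date_unit="us")

# Epoch timestamps, expressed in seconds.
epoch = dfd.to_json(date_format="epoch", date_unit="s")
```
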

Fallback behavior

If the JSON serializer cannot handle the container contents directly it will fall back in the following manner

  • if the dtype is unsupported (e.g. a complex dtype) then the default_handler, if provided, will be called for each value, otherwise an exception is raised

  • if an object is unsupported it will attempt the following

    • check if the object has defined a toDict method and call it. A toDict method should return a dict, which will then be JSON serialized

    • invoke the default_handler if one was provided

    • convert the object to a dict by traversing its contents. However, this will often fail with an OverflowError or give unexpected results

In general the best approach for unsupported objects or dtypes is to provide a `default_handler`. For example:

    >>> DataFrame([1.0, 2.0, complex(1.0, 2.0)]).to_json()  # raises
    RuntimeError: Unhandled numpy dtype 15

can be dealt with by specifying a simple `default_handler`:

    In [206]: pd.DataFrame([1.0, 2.0, complex(1.0, 2.0)]).to_json(default_handler=str)
    Out[206]: '{"0":{"0":"(1+0j)","1":"(2+0j)","2":"(1+2j)"}}'
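As a runnable sketch of this fallback (assuming only pandas itself), `default_handler=str` lets `to_json` stringify complex values that would otherwise raise:

```python
import pandas as pd

# A complex column has no native JSON representation. Without a handler,
# to_json raises; with default_handler=str each unsupported value is
# converted to its string form instead.
df = pd.DataFrame([1.0, 2.0, complex(1.0, 2.0)])
serialized = df.to_json(default_handler=str)
print(serialized)
```

Any callable accepting a single value works as the handler; `str` is simply the least surprising choice.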

Reading JSON

Reading a JSON string to pandas object can take a number of parameters. The parser will try to parse a `DataFrame` if `typ` is not supplied or is `None`. To explicitly force `Series` parsing, pass `typ=series`

  • `filepath_or_buffer` : a VALID JSON string or file handle / StringIO. The string could be a URL. Valid URL schemes include http, ftp, S3, and file. For file URLs, a host is expected. For instance, a local file could be file://localhost/path/to/table.json

  • `typ` : type of object to recover (series or frame), default 'frame'

  • `orient` :

    Series
    • default is `index`
    • allowed values are {`split`, `records`, `index`}

    DataFrame
    • default is `columns`
    • allowed values are {`split`, `records`, `index`, `columns`, `values`, `table`}

    The format of the JSON string

    `split` : dict like {index -> [index], columns -> [columns], data -> [values]}

    `records` : list like [{column -> value}, … , {column -> value}]

    `index` : dict like {index -> {column -> value}}

    `columns` : dict like {column -> {index -> value}}

    `values` : just the values array

    `table` : adhering to the JSON Table Schema
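To make the orient formats concrete, here is one small frame serialized under a few of them (a minimal sketch using `to_json`, the writer counterpart of `read_json`):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2]}, index=["x", "y"])

# split: separate columns / index / data arrays
split = df.to_json(orient="split")
# records: list of row dicts, index is dropped
records = df.to_json(orient="records")  # [{"a":1},{"a":2}]
# columns: dict of column -> {index -> value}
columns = df.to_json(orient="columns")  # {"a":{"x":1,"y":2}}
```

Note that `records` discards the index entirely, while `split` and `columns` preserve it.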

  • `dtype` : if `True`, infer dtypes, if a dict of column to dtype, then use those, if `False`, then don't infer dtypes at all, default is `True`, applies only to the data

  • `convert_axes` : boolean, try to convert the axes to the proper dtypes, default is `True`

  • `convert_dates` : a list of columns to parse for dates; if `True`, then try to parse date-like columns, default is `True`

  • `keep_default_dates` : boolean, default `True`. If parsing dates, then parse the default date-like columns

  • `numpy` : direct decoding to NumPy arrays. Default is `False`; supports numeric data only, although labels may be non-numeric. Also note that the JSON ordering MUST be the same for each term if `numpy=True`

  • `precise_float` : boolean, default `False`. Set to enable usage of the higher-precision (strtod) function when decoding strings to double values. The default (`False`) is to use fast but less precise builtin functionality

  • `date_unit` : string, the timestamp unit to detect if converting dates. Default `None`. By default the timestamp precision will be detected; if this is not desired then pass one of 's', 'ms', 'us' or 'ns' to force timestamp precision to seconds, milliseconds, microseconds or nanoseconds respectively

  • `lines` : reads the file as one JSON object per line

  • `encoding` : the encoding to use to decode py3 bytes

  • `chunksize` : when used in combination with `lines=True`, return a `JsonReader` which reads in `chunksize` lines per iteration
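A short sketch of `lines=True` (JSON Lines input) and `chunksize`; the string is wrapped in `StringIO` because newer pandas versions deprecate passing literal JSON strings to `read_json`:

```python
import pandas as pd
from io import StringIO

# JSON Lines input: one JSON object per line becomes one row
jsonl = '{"a": 1, "b": 2}\n{"a": 3, "b": 4}\n'

df = pd.read_json(StringIO(jsonl), lines=True)

# chunksize together with lines=True yields a JsonReader that
# produces DataFrames lazily, chunksize lines at a time
chunks = list(pd.read_json(StringIO(jsonl), lines=True, chunksize=1))
```

With `chunksize=1` over two lines, the reader yields two single-row frames.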

The parser will raise one of `ValueError`/`TypeError`/`AssertionError` if the JSON is not parseable

If a non-default `orient` was used when encoding to JSON, be sure to pass the same option here so that decoding produces sensible results; see Orient options for an overview
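A round-trip sketch of that point: the non-default orient used for encoding is passed back to `read_json` (`StringIO` is used because newer pandas deprecates raw JSON strings):

```python
import pandas as pd
from io import StringIO

df = pd.DataFrame({"a": [1, 2], "b": ["x", "y"]}, index=["r1", "r2"])

encoded = df.to_json(orient="split")
# Decoding with the same orient reconstructs the original frame,
# including the string index
decoded = pd.read_json(StringIO(encoded), orient="split")
```

Decoding the same string with the default orient would instead misinterpret the top-level keys as column labels.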

Data conversion

The default of `convert_axes=True`, `dtype=True`, and `convert_dates=True` will try to parse the axes, and all of the data, into appropriate types, including dates. If you need to override specific dtypes, pass a dict to `dtype`. `convert_axes` should only be set to `False` if you need to preserve string-like numbers (e.g. '1', '2') in an axes
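A small sketch of these defaults: string-valued numbers in the data are inferred to numeric dtypes unless `dtype=False` is passed:

```python
import pandas as pd
from io import StringIO

# The values here are JSON strings, not numbers
raw = '{"a":{"0":"1","1":"2"}}'

inferred = pd.read_json(StringIO(raw))                # inferred to integers
preserved = pd.read_json(StringIO(raw), dtype=False)  # strings kept as-is
```

The axis labels "0" and "1" are still converted to an integer index in both cases, since that is governed by `convert_axes`, not `dtype`.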

Note

Large integer values may be converted to dates if `convert_dates=True` and the data and / or column labels appear 'date-like'. The exact threshold depends on the `date_unit` specified. 'date-like' means that the column label meets one of the following criteria

  • it ends with '_at'

  • it ends with '_time'

  • it begins with 'timestamp'

  • it is 'modified'

  • it is 'date'
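For instance (a minimal sketch; the column name `created_time` is illustrative), a large-integer column whose name ends in `_time` is parsed to datetimes by default, while `convert_dates=False` leaves it numeric:

```python
import pandas as pd
from io import StringIO

# 1577836800000 milliseconds since the epoch corresponds to 2020-01-01;
# the "_time" suffix makes the column label 'date-like'
raw = '{"created_time":{"0":1577836800000}}'

parsed = pd.read_json(StringIO(raw))                         # datetime column
unparsed = pd.read_json(StringIO(raw), convert_dates=False)  # left as integer
```

Renaming the column to something not date-like (e.g. `count`) would also leave it as an integer, even with the default settings.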

Warning

When reading JSON data, automatic coercing into dtypes has some quirks

  • an index can be reconstructed in a different order from serialization, that is, the returned order is not guaranteed to be the same as before serialization

  • a column that was `float` data will be converted to `integer` if it can be done safely, e.g. a column of `1.`

  • bool columns will be converted to `integer` on reconstruction

Thus there are times where you may want to specify specific dtypes via the `dtype` keyword argument

Reading from a JSON string

    pd.read_json(json)

Reading from a file

    pd.read_json("test.json")

Don’t convert any data (but still convert axes and dates)

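For instance (illustrative data), passing dtype=object leaves all values as Python objects while the axes are still converted:

```python
from io import StringIO

import pandas as pd

json_str = '{"A": {"0": 1, "1": 2}, "B": {"0": 0.5, "1": 1.5}}'

# dtype=object keeps every column as object dtype; axes and any
# date-like columns are still converted
df = pd.read_json(StringIO(json_str), dtype=object)
print(df.dtypes)
```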

Specify dtypes for conversion

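A sketch of per-column dtype overrides (the column names and target dtypes are illustrative):

```python
from io import StringIO

import numpy as np
import pandas as pd

json_str = '{"A": {"0": 1, "1": 2}, "B": {"0": 0.5, "1": 1.5}}'

# Pass a dict mapping column name to the desired dtype
df = pd.read_json(StringIO(json_str), dtype={"A": "int8", "B": np.float32})
print(df.dtypes)
```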

Preserve string indices

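A small sketch: with convert_axes=False, string axis labels that look numeric are preserved rather than parsed back to integers:

```python
from io import StringIO

import numpy as np
import pandas as pd

si = pd.DataFrame(np.zeros((2, 2)),
                  columns=list(range(2)),
                  index=[str(i) for i in range(2)])
json_str = si.to_json()

# convert_axes=False keeps the labels exactly as written
back = pd.read_json(StringIO(json_str), convert_axes=False)
print(back.index)
```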

Dates written in nanoseconds need to be read back in nanoseconds

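A minimal sketch of the unit round trip (illustrative dates): epoch values serialized with date_unit="ns" must be read back with the same unit:

```python
from io import StringIO

import pandas as pd

df = pd.DataFrame({"date": pd.date_range("2013-01-01", periods=2)})

# Serialize in nanoseconds, then read back with the matching unit
json_str = df.to_json(date_unit="ns")
back = pd.read_json(StringIO(json_str), date_unit="ns")
print(back["date"])
```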

The Numpy parameter

Note

This param has been deprecated as of version 1.0.0 and will raise a FutureWarning.

This supports numeric data only. Index and columns labels may be non-numeric, e.g. strings, dates etc.

If numpy=True is passed to read_json an attempt will be made to sniff an appropriate dtype during deserialization and to subsequently decode directly to NumPy arrays, bypassing the need for intermediate Python objects.

This can provide speedups if you are deserialising a large amount of numeric data


The speedup is less noticeable for smaller datasets


Warning

Direct NumPy decoding makes a number of assumptions and may fail or produce unexpected output if these assumptions are not satisfied

  • data is numeric

  • data is uniform. The dtype is sniffed from the first value decoded. A ValueError may be raised, or incorrect output may be produced if this condition is not satisfied.

  • labels are ordered. Labels are only read from the first container, it is assumed that each subsequent row / column has been encoded in the same order. This should be satisfied if the data was encoded using to_json but may not be the case if the JSON is from another source.

Normalization

pandas provides a utility function to take a dict or list of dicts and normalize this semi-structured data into a flat table


The max_level parameter provides more control over which level to end normalization. With max_level=1 the following snippet normalizes until the first nesting level of the provided dict.

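A sketch with an illustrative nested record: max_level=1 stops flattening after the first nesting level, so the doubly nested dict stays a single object column:

```python
import pandas as pd

data = [{
    "CreatedBy": {"Name": "User001"},
    "Lookup": {
        "TextField": "Some text",
        "UserField": {"Id": "ID001", "Name": "Name001"},
    },
}]

# The "UserField" dict is not flattened further
flat = pd.json_normalize(data, max_level=1)
print(flat.columns.tolist())
```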

Line delimited json

pandas is able to read and write line-delimited json files that are common in data processing pipelines using Hadoop or Spark

For line-delimited json files, pandas can also return an iterator which reads in chunksize lines at a time. This can be useful for large files or to read from a stream.

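A minimal sketch with an illustrative line-delimited payload — lines=True with chunksize yields DataFrames of that many lines:

```python
from io import StringIO

import pandas as pd

jsonl = '{"a": 1, "b": 2}\n{"a": 3, "b": 4}\n{"a": 5, "b": 6}\n'

# chunksize returns a reader that yields DataFrames chunk by chunk
reader = pd.read_json(StringIO(jsonl), lines=True, chunksize=2)
chunks = list(reader)
print([len(c) for c in chunks])
```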

Table schema

Table Schema is a spec for describing tabular datasets as a JSON object. The JSON includes information on the field names, types, and other attributes. You can use the orient table to build a JSON string with two fields, schema and data.

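A small sketch (illustrative frame): serializing with orient="table" produces a JSON object whose top-level keys are "schema" and "data":

```python
import json

import pandas as pd

df = pd.DataFrame({"A": [1, 2], "B": ["x", "y"]})

# Inspect the two top-level fields and the declared columns
payload = json.loads(df.to_json(orient="table"))
print(sorted(payload))
print([f["name"] for f in payload["schema"]["fields"]])
```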

The schema field contains the fields key, which itself contains a list of column name to type pairs, including the Index or MultiIndex (see below for a list of types). The schema field also contains a primaryKey field if the (Multi)index is unique.

The second field, data, contains the serialized data with the records orient. The index is included, and any datetimes are ISO 8601 formatted, as required by the Table Schema spec.

The full list of types supported is described in the Table Schema spec. This table shows the mapping from pandas types:

pandas type          Table Schema type
int64                integer
float64              number
bool                 boolean
datetime64[ns]       datetime
timedelta64[ns]      duration
categorical          any
object               str

A few notes on the generated table schema

  • The schema object contains a pandas_version field. This contains the version of pandas' dialect of the schema, and will be incremented with each revision.

  • All dates are converted to UTC when serializing, even timezone naive values, which are treated as UTC with an offset of 0.


  • datetimes with a timezone (before serializing) include an additional field tz with the time zone name (e.g. 'US/Central').

  • Periods are converted to timestamps before serialization, and so have the same behavior of being converted to UTC. In addition, periods will contain an additional field freq with the period's frequency, e.g. 'A-DEC'.

  • Categoricals use the any type and an enum constraint listing the set of possible values. Additionally, an ordered field is included.
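For instance, build_table_schema (the helper behind orient="table") shows the any type and the ordered flag for a categorical — a small sketch with illustrative data:

```python
import pandas as pd
from pandas.io.json import build_table_schema

s = pd.Series(pd.Categorical(["a", "b", "a"], ordered=True))
schema = build_table_schema(s)

# The "values" field carries type "any", an enum constraint with the
# categories, and the ordered flag
values_field = schema["fields"][1]
print(values_field)
```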

  • A primaryKey field, containing an array of labels, is included if the index is unique.

  • The primaryKey behavior is the same with MultiIndexes, but in this case the primaryKey is an array.

  • The default naming roughly follows these rules

    • For series, the object.name is used. If that's none, then the name is values.

    • For DataFrames, the stringified version of the column name is used.

    • For Index (not MultiIndex), index.name is used, with a fallback to index if that is None.

    • For MultiIndex, mi.names is used. If any level has no name, then level_<i> is used.

read_json also accepts orient='table' as an argument. This allows for the preservation of metadata such as dtypes and index names in a round-trippable manner.

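A minimal round-trip sketch (illustrative frame): the dtype and the index name survive the trip through orient="table":

```python
from io import StringIO

import pandas as pd

df = pd.DataFrame({"A": [1, 2]},
                  index=pd.Index(["x", "y"], name="idx"))

# Serialize and deserialize with the table orient
back = pd.read_json(StringIO(df.to_json(orient="table")), orient="table")
print(back.index.name, back["A"].dtype)
```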

Please note that the literal string 'index' as the name of an Index is not round-trippable, nor are any names beginning with 'level_' within a MultiIndex. These are used by default in DataFrame.to_json() to indicate missing values and the subsequent read cannot distinguish the intent.


When using orient='table' along with user-defined ExtensionArray, the generated schema will contain an additional extDtype key in the respective fields element. This extra key is not standard but does enable JSON roundtrips for extension types (e.g. Period).

The extDtype key carries the name of the extension; if you have properly registered the ExtensionDtype, pandas will use said name to perform a lookup into the registry and re-convert the serialized data into your custom dtype.

HTML

Reading HTML content

Warning

We highly encourage you to read the below regarding the issues surrounding the BeautifulSoup4/html5lib/lxml parsers

The top-level

In [40]: from pandas.api.types import CategoricalDtype

In [41]: dtype = CategoricalDtype(["d", "c", "b", "a"], ordered=True)

In [42]: pd.read_csv(StringIO(data), dtype={"col1": dtype}).dtypes
Out[42]: 
col1    category
col2      object
col3       int64
dtype: object
61 function can accept an HTML string/file/URL and will parse HTML tables into list of pandas
In [40]: from pandas.api.types import CategoricalDtype

In [41]: dtype = CategoricalDtype(["d", "c", "b", "a"], ordered=True)

In [42]: pd.read_csv(StringIO(data), dtype={"col1": dtype}).dtypes
Out[42]: 
col1    category
col2      object
col3       int64
dtype: object
40. Let’s look at a few examples

Note

read_html returns a list of DataFrame objects, even if there is only a single table contained in the HTML content.

Read a URL with no options


Note

The data from the above URL changes every Monday so the resulting data above may be slightly different

Read in the content of the file from the above URL and pass it to read_html as a string.


You can even pass in an instance of StringIO if you so desire.


Note

The following examples are not run by the IPython evaluator due to the fact that having so many network-accessing functions slows down the documentation build. If you spot an error or an example that doesn’t run, please do not hesitate to report it over on pandas GitHub issues page

Read a URL and match a table that contains specific text


Specify a header row (by default <th> or <td> elements located within a <thead> are used to form the column index; if multiple rows are contained within <thead> then a MultiIndex is created); if specified, the header row is taken from the data minus the parsed header elements (<th> elements).


Specify an index column


Specify a number of rows to skip


Specify a number of rows to skip using a list (range works as well)

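Taken together, the options above can be sketched against a small inline table (the id value and data here are invented); again, read_html needs lxml or bs4 + html5lib, so the call is guarded:

```python
import pandas as pd
from io import StringIO

html = """
<table id="sales">
  <tr><th>idx</th><th>month</th><th>total</th></tr>
  <tr><td>0</td><td>Jan</td><td>10</td></tr>
  <tr><td>1</td><td>Feb</td><td>20</td></tr>
</table>
"""

try:
    dfs = pd.read_html(
        StringIO(html),
        attrs={"id": "sales"},  # keep only tables with this attribute
        header=0,               # first row supplies the column index
        index_col=0,            # first column becomes the row index
        skiprows=0,             # nothing skipped here; a list or range works too
    )
    print(dfs[0])
except ImportError:
    pass  # no HTML parser (lxml / bs4 + html5lib) available
```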

Specify an HTML attribute


Specify values that should be converted to NaN


Specify whether to keep the default set of NaN values


Specify converters for columns. This is useful for numerical text data that has leading zeros. By default columns that are numerical are cast to numeric types and the leading zeros are lost. To avoid this, we can convert these columns to strings

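The converters mechanism is easiest to see with read_csv, which shares it with read_html; the column name and codes here are invented:

```python
import pandas as pd
from io import StringIO

data = "code,name\n01,Alpha\n002,Beta"  # codes with leading zeros

# Parsed as integers by default, so the leading zeros are lost
print(pd.read_csv(StringIO(data))["code"].tolist())  # [1, 2]

# A str converter keeps the original text intact
print(pd.read_csv(StringIO(data), converters={"code": str})["code"].tolist())  # ['01', '002']
```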

Use some combination of the above


Read in pandas to_html output (with some loss of floating point precision)


The lxml backend will raise an error on a failed parse if that is the only parser you provide. If you only have a single parser you can provide just a string, but it is considered good practice to pass a list with one string if, for example, the function expects a sequence of strings. You may use


Or you could pass flavor="lxml" without a list


However, if you have bs4 and html5lib installed and pass None or ["lxml", "bs4"] then the parse will most likely succeed. Note that as soon as a parse succeeds, the function will return


Links can be extracted from cells along with the text using extract_links="all"


New in version 1.5.0

Writing to HTML files

DataFrame objects have an instance method to_html, which renders the contents of the DataFrame as an HTML table. The function arguments are as in the method to_string described above.

Note

Not all of the possible options for DataFrame.to_html are shown here for brevity’s sake. See to_html() for the full set of options.

Note

In an HTML-rendering supported environment like a Jupyter Notebook, display(HTML(...)) will render the raw HTML into the environment.

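A minimal sketch with invented data; to_html needs no optional dependencies:

```python
import pandas as pd

df = pd.DataFrame({"name": ["An", "Binh"], "score": [9.5, 8.0]})
html = df.to_html()

# The output is a plain string wrapping the frame in <table> markup
print(html.splitlines()[0])  # <table border="1" class="dataframe">
```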

The columns argument will limit the columns shown


float_format takes a Python callable to control the precision of floating point values


bold_rows will make the row labels bold by default, but you can turn that off

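The three options above can be combined in one sketch (column names and values invented):

```python
import pandas as pd

df = pd.DataFrame({"a": [0.123456, 1.5], "b": [10.0, 20.0], "c": ["x", "y"]})

html = df.to_html(
    columns=["a", "b"],            # only show columns a and b
    float_format="{:.2f}".format,  # two decimal places
    bold_rows=False,               # row labels as <td>, not bold <th>
)
print(html)
```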

The classes argument provides the ability to give the resulting HTML table CSS classes. Note that these classes are appended to the existing "dataframe" class


The render_links argument provides the ability to add hyperlinks to cells that contain URLs


Finally, the escape argument allows you to control whether the “<”, “>” and “&” characters are escaped in the resulting HTML (by default it is True). So to get the HTML without escaped characters pass escape=False

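Both renderings can be produced side by side (the cell value is invented):

```python
import pandas as pd

df = pd.DataFrame({"tag": ["<b>bold</b>"]})

escaped = df.to_html()                # default: "<" and ">" become &lt; and &gt;
unescaped = df.to_html(escape=False)  # markup passes through untouched

print("&lt;b&gt;" in escaped)      # True
print("<b>bold</b>" in unescaped)  # True
```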

Escaped


Not escaped


Note

Some browsers may not show a difference in the rendering of the previous two HTML tables

HTML Table Parsing Gotchas

There are some versioning issues surrounding the libraries that are used to parse HTML tables in the top-level pandas io function read_html

Issues with lxml

  • Benefits

    • lxml is very fast

    • lxml requires Cython to install correctly

  • Drawbacks

    • lxml does not make any guarantees about the results of its parse unless it is given strictly valid markup

    • In light of the above, we have chosen to allow you, the user, to use the lxml backend, but this backend will use html5lib if lxml fails to parse

    • It is therefore highly recommended that you install both BeautifulSoup4 and html5lib, so that you will still get a valid result (provided everything else is valid) even if lxml fails

Issues with BeautifulSoup4 using lxml as a backend

  • The above issues hold here as well since BeautifulSoup4 is essentially just a wrapper around a parser backend

Issues with BeautifulSoup4 using html5lib as a backend

  • Benefits

    • html5lib is far more lenient than lxml and consequently deals with real-life markup in a much saner way rather than just, e.g., dropping an element without notifying you

    • html5lib generates valid HTML5 markup from invalid markup automatically. This is extremely important for parsing HTML tables, since it guarantees a valid document. However, that does NOT mean that it is “correct”, since the process of fixing markup does not have a single definition

    • html5lib is pure Python and requires no additional build steps beyond its own installation

  • Drawbacks

    • The biggest drawback to using html5lib is that it is slow as molasses. However consider the fact that many tables on the web are not big enough for the parsing algorithm runtime to matter. It is more likely that the bottleneck will be in the process of reading the raw text from the URL over the web, i.e., IO (input-output). For very large tables, this might not be true

LaTeX

New in version 1.3.0

Currently there are no methods to read from LaTeX, only output methods

Writing to LaTeX files

Note

DataFrame and Styler objects currently have a to_latex method. We recommend using the Styler.to_latex() method over DataFrame.to_latex() due to the former’s greater flexibility with conditional styling, and the latter’s possible future deprecation.

Review the documentation for Styler.to_latex, which gives examples of conditional styling and explains the operation of its keyword arguments.

For simple application the following pattern is sufficient

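A sketch of the pattern with invented data (Styler requires jinja2, so the call is guarded):

```python
import pandas as pd

df = pd.DataFrame({"x": [1, 2], "y": [3.0, 4.5]})

try:
    latex = df.style.to_latex()  # a \begin{tabular}...\end{tabular} string
    print(latex)
except ImportError:
    pass  # jinja2 is not installed
```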

To format values before output, chain the Styler.format method.


XML

Reading XML

New in version 1.3.0

The top-level read_xml() function can accept an XML string/file/URL and will parse nodes and attributes into a pandas DataFrame

Note

Since there is no standard XML structure where design types can vary in many ways, read_xml works best with flatter, shallow versions. If an XML document is deeply nested, use the stylesheet feature to transform XML into a flatter version

Let’s look at a few examples

Read an XML string

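A sketch with a small invented document; parser="etree" relies only on the standard library, while the default parser requires lxml:

```python
import pandas as pd
from io import StringIO

xml = """<?xml version="1.0"?>
<data>
  <row><shape>square</shape><sides>4</sides></row>
  <row><shape>circle</shape><sides/></row>
</data>"""

# Each repeating <row> becomes a DataFrame row; the empty <sides/> becomes NaN
df = pd.read_xml(StringIO(xml), parser="etree")
print(df)
```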

Read a URL with no options


Read in the content of the “books.xml” file and pass it to read_xml as a string


Read in the content of the “books.xml” as an instance of StringIO or BytesIO and pass it to read_xml



Even read XML from AWS S3 buckets such as NIH NCBI PMC Article Datasets providing Biomedical and Life Science Journals


With lxml as the default parser, you access the full-featured XML library that extends Python’s ElementTree API. One powerful tool is the ability to query nodes selectively or conditionally with more expressive XPath


Specify only elements or only attributes to parse


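The two switches can be sketched on an invented document that carries both an attribute and a child element per row:

```python
import pandas as pd
from io import StringIO

xml = """<data>
  <row id="1"><shape>square</shape></row>
  <row id="2"><shape>circle</shape></row>
</data>"""

# Elements only: the id attribute is ignored
print(pd.read_xml(StringIO(xml), elems_only=True, parser="etree").columns.tolist())  # ['shape']

# Attributes only: the shape element is ignored
print(pd.read_xml(StringIO(xml), attrs_only=True, parser="etree").columns.tolist())  # ['id']
```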

XML documents can have namespaces with prefixes and default namespaces without prefixes, both of which are denoted with a special attribute xmlns. In order to parse by node under a namespace context, xpath must reference a prefix

For example, below XML contains a namespace with prefix, doc, and URI at https://example.com. In order to parse doc:row nodes, namespaces must be used

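A sketch of the idea, reusing the doc prefix and example.com URI from the description above (the document itself is invented; parser="etree" avoids the lxml dependency):

```python
import pandas as pd
from io import StringIO

xml = """<doc:data xmlns:doc="https://example.com">
  <doc:row><doc:shape>square</doc:shape></doc:row>
  <doc:row><doc:shape>circle</doc:shape></doc:row>
</doc:data>"""

# The prefix in xpath must be mapped to its URI via the namespaces dict
df = pd.read_xml(
    StringIO(xml),
    xpath=".//doc:row",
    namespaces={"doc": "https://example.com"},
    parser="etree",
)
print(df)
```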

Similarly, an XML document can have a default namespace without prefix. Failing to assign a temporary prefix will return no nodes and raise a ValueError. But assigning any temporary name to the correct URI allows parsing by nodes


However, if XPath does not reference node names such as default, /*, then namespaces is not required

With lxml as parser, you can flatten nested XML documents with an XSLT script, which can also be a string/file/URL. As background, XSLT is a special-purpose language written in a special XML file that can transform original XML documents into other XML, HTML, even text (CSV, JSON, etc.) using an XSLT processor

For example, consider this somewhat nested structure of Chicago “L” Rides where station and rides elements encapsulate data in their own sections. With below XSLT, lxml can transform the original nested document into a flatter output (as shown below for demonstration) for easier parse into DataFrame


For very large XML files that can range in hundreds of megabytes to gigabytes, pandas.read_xml() supports parsing such sizeable files using lxml's iterparse and etree's iterparse, which are memory-efficient methods to iterate through an XML tree and extract specific elements and attributes without holding the entire tree in memory

New in version 1.5.0

To use this feature, you must pass a physical XML file path into read_xml and use the iterparse argument. Files should not be compressed or point to online sources but stored on local disk. Also, iterparse should be a dictionary where the key is the repeating node in the document (which becomes the rows) and the value is a list of any element or attribute that is a descendant (i.e., child, grandchild) of the repeating node. Since XPath is not used in this method, descendants do not need to share the same relationship with one another. Below shows an example of reading in Wikipedia’s very large (12 GB+) latest article data dump

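Since iterparse needs a real file on local disk, a sketch has to write a small invented document out first (tag names loosely modeled on the Wikipedia dump; requires pandas 1.5+):

```python
import os
import tempfile

import pandas as pd

xml = """<mediawiki>
  <page>
    <title>A</title>
    <ns>0</ns>
    <id>1</id>
  </page>
  <page>
    <title>B</title>
    <ns>0</ns>
    <id>2</id>
  </page>
</mediawiki>"""

with tempfile.NamedTemporaryFile("w", suffix=".xml", delete=False) as f:
    f.write(xml)
    path = f.name

# page is the repeating node; the listed descendants become the columns
df = pd.read_xml(path, iterparse={"page": ["title", "ns", "id"]}, parser="etree")
os.unlink(path)
print(df)
```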

Writing XML

New in version 1.3.0

DataFrame objects have an instance method to_xml which renders the contents of the DataFrame as an XML document

Note

This method does not support special properties of XML including DTD, CData, XSD schemas, processing instructions, comments, and others. Only namespaces at the root level are supported. However, ``stylesheet`` allows design changes after initial output.

Let’s look at a few examples

Write an XML without options


Write an XML with new root and row name


Write an attribute-centric XML


Write a mix of elements and attributes


Any ``DataFrame`` with hierarchical columns will be flattened for XML element names with levels delimited by underscores.

Write an XML with default namespace


Write an XML with namespace prefix


Write an XML without declaration or pretty print


Write an XML and transform with stylesheet

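The scenarios above all drive the same method. As a minimal sketch, with a made-up frame standing in for the docs' sample data, and ``parser="etree"`` so that ``lxml`` is not required:

```python
import pandas as pd

# Made-up stand-in for the docs' sample frame
geom_df = pd.DataFrame(
    {"shape": ["square", "circle"], "degrees": [360, 360], "sides": [4.0, None]}
)

# Without options: root <data>, rows <row>, columns as child elements
print(geom_df.to_xml(parser="etree"))

# New root and row names, attribute-centric columns, no XML declaration
attr_xml = geom_df.to_xml(
    root_name="geometry",
    row_name="objects",
    attr_cols=["shape", "degrees"],
    xml_declaration=False,
    parser="etree",
)
print(attr_xml)
```

Namespace and stylesheet variants take the ``namespaces``, ``prefix``, and ``stylesheet`` keywords; the last of these requires ``lxml``.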

XML Final Notes

  • All XML documents adhere to W3C specifications. Both ``etree`` and ``lxml`` parsers will fail to parse any markup document that is not well-formed or does not follow XML syntax rules. Do be aware that HTML is not an XML document unless it follows XHTML specs. However, other popular markup types including KML, XAML, RSS, MusicML, and MathML are compliant XML schemas

  • For the above reason, if your application builds XML prior to pandas operations, use appropriate DOM libraries like ``etree`` and ``lxml`` to build the necessary document and not by string concatenation or regex adjustments. Always remember XML is a special text file with markup rules

  • With very large XML files (several hundred MBs to GBs), XPath and XSLT can become memory-intensive operations. Be sure to have enough available RAM for reading and writing to large XML files (roughly about 5 times the size of text)

  • Because XSLT is a programming language, use it with caution since such scripts can pose a security risk in your environment and can run large or infinite recursive operations. Always test scripts on small fragments before full run

  • The ``etree`` parser supports all functionality of both ``read_xml`` and ``to_xml`` except for complex XPath and any XSLT. Though limited in features, ``etree`` is still a reliable and capable parser and tree builder. Its performance may trail ``lxml`` to a certain degree for larger files, but it is relatively unnoticeable on small to medium size files

Excel files

The ``read_excel()`` method can read Excel 2007+ (``.xlsx``) files using the ``openpyxl`` Python module. Excel 2003 (``.xls``) files can be read using ``xlrd``. Binary Excel (``.xlsb``) files can be read using ``pyxlsb``. The ``to_excel()`` instance method is used for saving a ``DataFrame`` to Excel. Generally the semantics are similar to working with CSV data. See the cookbook for some advanced strategies

Warning

The ``xlwt`` package for writing old-style ``.xls`` Excel files is no longer maintained. The ``xlrd`` package is now only for reading old-style ``.xls`` files

Before pandas 1.3.0, the default argument ``engine=None`` to ``read_excel()`` would result in using the ``xlrd`` engine in many cases, including new Excel 2007+ (``.xlsx``) files. pandas will now default to using the ``openpyxl`` engine

It is strongly encouraged to install ``openpyxl`` to read Excel 2007+ (``.xlsx``) files. Please do not report issues when using ``xlrd`` to read ``.xlsx`` files. This is no longer supported; switch to using ``openpyxl`` instead

Attempting to use the ``xlwt`` engine will raise a ``FutureWarning`` unless the option ``io.excel.xls.writer`` is set to ``"xlwt"``. While this option is now deprecated and will also raise a ``FutureWarning``, it can be globally set and the warning suppressed. Users are recommended to write ``.xlsx`` files using the ``openpyxl`` engine instead

Reading Excel files

In the most basic use-case, ``read_excel`` takes a path to an Excel file, and the ``sheet_name`` indicating which sheet to parse.

``ExcelFile`` class

To facilitate working with multiple sheets from the same file, the ``ExcelFile`` class can be used to wrap the file and can be passed into ``read_excel``. There will be a performance benefit for reading multiple sheets as the file is read into memory only once.


The ``ExcelFile`` class can also be used as a context manager.

The ``sheet_names`` property will generate a list of the sheet names in the file.
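A short sketch of the ``ExcelFile`` workflow; the file and sheet names here are made up, and the workbook is written first so the snippet is self-contained (writing ``.xlsx`` assumes ``openpyxl`` is installed):

```python
import pandas as pd

# Build a small workbook to read back (file name is arbitrary)
pd.DataFrame({"a": [1, 2]}).to_excel(
    "path_to_file.xlsx", sheet_name="Sheet1", index=False
)

# Wrap the file so it is read into memory only once
xlsx = pd.ExcelFile("path_to_file.xlsx")
df = pd.read_excel(xlsx, "Sheet1")

# The class also works as a context manager
with pd.ExcelFile("path_to_file.xlsx") as xls:
    names = xls.sheet_names  # list of the sheet names in the file
    df = pd.read_excel(xls, "Sheet1")
print(names)
```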

The primary use-case for an ``ExcelFile`` is parsing multiple sheets with different parameters.

Note that if the same parsing parameters are used for all sheets, a list of sheet names can simply be passed to ``read_excel`` with no loss in performance.
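Both patterns might look like the following; the file and sheet names are made up, and ``openpyxl`` is assumed for writing the workbook:

```python
import pandas as pd

# Two-sheet workbook to parse
with pd.ExcelWriter("path_to_file.xlsx") as writer:
    pd.DataFrame({"a": [1], "b": [2]}).to_excel(
        writer, sheet_name="Sheet1", index=False
    )
    pd.DataFrame({"a": [3], "b": [4]}).to_excel(
        writer, sheet_name="Sheet2", index=False
    )

# Different parameters per sheet: wrap the file in ExcelFile
data = {}
with pd.ExcelFile("path_to_file.xlsx") as xls:
    data["Sheet1"] = pd.read_excel(xls, "Sheet1", index_col=None, na_values=["NA"])
    data["Sheet2"] = pd.read_excel(xls, "Sheet2", index_col=0)

# Same parameters for every sheet: pass a list of names directly
same = pd.read_excel("path_to_file.xlsx", sheet_name=["Sheet1", "Sheet2"])
```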

``ExcelFile`` can also be called with a ``xlrd.book.Book`` object as a parameter. This allows the user to control how the Excel file is read. For example, sheets can be loaded on demand by calling ``xlrd.open_workbook()`` with ``on_demand=True``.

Specifying sheets

Note

The second argument is ``sheet_name``, not to be confused with ``ExcelFile.sheet_names``

Note

An ExcelFile’s attribute ``sheet_names`` provides access to a list of sheets

  • The argument ``sheet_name`` allows specifying the sheet or sheets to read

  • The default value for ``sheet_name`` is 0, indicating to read the first sheet

  • Pass a string to refer to the name of a particular sheet in the workbook

  • Pass an integer to refer to the index of a sheet. Indices follow Python convention, beginning at 0

  • Pass a list of either strings or integers, to return a dictionary of specified sheets

  • Pass ``None`` to return a dictionary of all available sheets


Using the sheet index


Using all default values


Using None to get all sheets


Using a list to get multiple sheets

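The variants above can be sketched as follows, against a made-up one-sheet workbook (``openpyxl`` assumed):

```python
import pandas as pd

# One-sheet workbook for illustration
pd.DataFrame({"a": [1]}).to_excel(
    "path_to_file.xlsx", sheet_name="Sheet1", index=False
)

by_name = pd.read_excel("path_to_file.xlsx", "Sheet1")  # sheet by name
by_index = pd.read_excel("path_to_file.xlsx", 0)        # sheet by position
defaults = pd.read_excel("path_to_file.xlsx")           # first sheet
all_sheets = pd.read_excel("path_to_file.xlsx", sheet_name=None)    # dict of all
listed = pd.read_excel("path_to_file.xlsx", sheet_name=["Sheet1"])  # dict of listed
```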

``read_excel`` can read more than one sheet, by setting ``sheet_name`` to either a list of sheet names, a list of sheet positions, or ``None`` to read all sheets. Sheets can be specified by sheet index or sheet name, using an integer or string, respectively

Reading a ``MultiIndex``

``read_excel`` can read a ``MultiIndex`` index, by passing a list of columns to ``index_col``, and a ``MultiIndex`` column by passing a list of rows to ``header``. If either the index or columns have serialized level names, those will be read in as well by specifying the rows/columns that make up the levels

For example, to read in a ``MultiIndex`` index without names:

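A hedged round-trip sketch (the file name is arbitrary, and ``openpyxl`` is assumed for writing):

```python
import pandas as pd

# Write a frame with a two-level index, then read it back
df = pd.DataFrame(
    {"a": [1, 2, 3, 4], "b": [5, 6, 7, 8]},
    index=pd.MultiIndex.from_product([["a", "b"], ["c", "d"]]),
)
df.to_excel("path_to_file.xlsx")
df = pd.read_excel("path_to_file.xlsx", index_col=[0, 1])
print(df)
```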

If the index has level names, they will be parsed as well, using the same parameters.

If the source file has both ``MultiIndex`` index and columns, lists specifying each should be passed to ``index_col`` and ``header``.
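Sketching that case as well (level names and file name are made up; ``openpyxl`` assumed):

```python
import pandas as pd

# MultiIndex on both axes, round-tripped through Excel
df = pd.DataFrame(
    {"a": [1, 2, 3, 4], "b": [5, 6, 7, 8]},
    index=pd.MultiIndex.from_product(
        [["a", "b"], ["c", "d"]], names=["lvl1", "lvl2"]
    ),
)
df.columns = pd.MultiIndex.from_product([["a"], ["b", "d"]], names=["c1", "c2"])
df.to_excel("path_to_file.xlsx")
df = pd.read_excel("path_to_file.xlsx", index_col=[0, 1], header=[0, 1])
```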

Missing values in columns specified in ``index_col`` will be forward filled to allow roundtripping with ``to_excel`` for ``merged_cells=True``. To avoid forward filling the missing values, use ``set_index`` after reading the data instead of ``index_col``

Parsing specific columns

It is often the case that users will insert columns to do temporary computations in Excel and you may not want to read in those columns.

``read_excel`` takes a ``usecols`` keyword to allow you to specify a subset of columns to parse

Changed in version 1.0.0

Passing in an integer for ``usecols`` will no longer work. Please pass in a list of ints from 0 to ``usecols`` inclusive instead

You can specify a comma-delimited set of Excel columns and ranges as a string:

pd.read_excel("path_to_file.xls", "Sheet1", usecols="A,C:E")

If usecols is a list of integers, then it is assumed to be the file column indices to be parsed:

pd.read_excel("path_to_file.xls", "Sheet1", usecols=[0, 2, 3])

Element order is ignored, so usecols=[0, 1] is the same as [1, 0].

If usecols is a list of strings, it is assumed that each string corresponds to a column name provided either by the user in names or inferred from the document header row(s). Those strings define which columns will be parsed:

pd.read_excel("path_to_file.xls", "Sheet1", usecols=["foo", "bar"])

Element order is ignored, so usecols=["baz", "joe"] is the same as ["joe", "baz"].

If usecols is callable, the callable function will be evaluated against the column names, returning names where the callable function evaluates to True:

pd.read_excel("path_to_file.xls", "Sheet1", usecols=lambda x: x.isalpha())
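The same usecols semantics apply to pd.read_csv, which lets us illustrate the three forms without an Excel file on disk (the column names below are invented for this sketch):

```python
from io import StringIO

import pandas as pd

data = "foo,bar,baz\n1,2,3\n4,5,6"

# List of strings: parse only the named columns
by_name = pd.read_csv(StringIO(data), usecols=["foo", "baz"])

# List of integers: parse by file column index
by_index = pd.read_csv(StringIO(data), usecols=[0, 2])

# Callable: keep columns for which the function returns True
by_callable = pd.read_csv(StringIO(data), usecols=lambda c: c.startswith("b"))

print(list(by_name.columns))      # ['foo', 'baz']
print(list(by_index.columns))     # ['foo', 'baz']
print(list(by_callable.columns))  # ['bar', 'baz']
```

Note that in every form the order of the selected columns follows the file, not the order given in usecols.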

Parsing dates

Datetime-like values are normally automatically converted to the appropriate dtype when reading the Excel file. But if you have a column of strings that look like dates (but are not actually formatted as dates in Excel), you can use the parse_dates keyword to parse those strings to datetimes:

pd.read_excel("path_to_file.xls", "Sheet1", parse_dates=["date_strings"])
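read_csv shares the parse_dates keyword, so the string-to-datetime conversion can be sketched without an Excel file (column names are invented):

```python
from io import StringIO

import pandas as pd

data = "date_strings,value\n2021-01-01,1\n2021-01-02,2"

# Without parse_dates the dates remain plain strings (object dtype)
raw = pd.read_csv(StringIO(data))

# With parse_dates the column is converted to datetime64
parsed = pd.read_csv(StringIO(data), parse_dates=["date_strings"])

print(raw["date_strings"].dtype)     # object
print(parsed["date_strings"].dtype)  # datetime64[ns]
```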

Cell converters

It is possible to transform the contents of Excel cells via the converters option. For instance, to convert a column to boolean:

pd.read_excel("path_to_file.xls", "Sheet1", converters={"MyBools": bool})

This options handles missing values and treats exceptions in the converters as missing data. Transformations are applied cell by cell rather than to the column as a whole, so the array dtype is not guaranteed. For instance, a column of integers with missing values cannot be transformed to an array with integer dtype, because NaN is strictly a float. You can manually mask missing data to recover integer dtype

def cfun(x):
    return int(x) if x else -1

pd.read_excel("path_to_file.xls", "Sheet1", converters={"MyInts": cfun})
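read_csv also accepts converters; there each raw cell value arrives as a string (empty for a missing cell), which lets us sketch the integer-masking idea above without an Excel file (column names are invented):

```python
from io import StringIO

import pandas as pd

data = "MyInts,other\n1,a\n,b\n3,c"

def cfun(x):
    # Missing cells arrive as the empty string; map them to -1
    return int(x) if x else -1

df = pd.read_csv(StringIO(data), converters={"MyInts": cfun})

print(df["MyInts"].tolist())  # [1, -1, 3]
```

Because the converter returns an int for every cell, no NaN is introduced and the column keeps an integer dtype.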

Dtype specifications

As an alternative to converters, the type for an entire column can be specified using the dtype keyword, which takes a dictionary mapping column names to types. To interpret data with no type inference, use the type str or object:

pd.read_excel("path_to_file.xls", dtype={"MyInts": "int64", "MyText": str})
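The dtype mapping works the same way in read_csv, so the effect can be demonstrated directly (column names are invented for this sketch):

```python
from io import StringIO

import pandas as pd

data = "MyInts,MyText\n1,a\n2,b"

df = pd.read_csv(StringIO(data), dtype={"MyInts": "int64", "MyText": str})

print(df.dtypes["MyInts"])  # int64
print(df.dtypes["MyText"])  # object (str values are stored in an object column)
```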

Writing Excel files

Writing Excel files to disk

To write a DataFrame object to a sheet of an Excel file, you can use the to_excel instance method. The arguments are largely the same as to_csv described above, the first argument being the name of the Excel file, and the optional second argument the name of the sheet to which the DataFrame should be written. For example:

df.to_excel("path_to_file.xlsx", sheet_name="Sheet1")

Files with a .xls extension will be written using xlwt and those with a .xlsx extension will be written using xlsxwriter (if available) or openpyxl.

The DataFrame will be written in a way that tries to mimic the REPL output. The index_label will be placed in the second row instead of the first. You can place it in the first row by setting the merge_cells option in to_excel() to False:

df.to_excel("path_to_file.xlsx", index_label="label", merge_cells=False)

In order to write separate DataFrames to separate sheets in a single Excel file, one can pass an ExcelWriter:

with pd.ExcelWriter("path_to_file.xlsx") as writer:
    df1.to_excel(writer, sheet_name="Sheet1")
    df2.to_excel(writer, sheet_name="Sheet2")

Writing Excel files to memory

pandas supports writing Excel files to buffer-like objects such as StringIO or BytesIO using ExcelWriter:

from io import BytesIO

bio = BytesIO()

# By setting the 'engine' in the ExcelWriter constructor.
writer = pd.ExcelWriter(bio, engine="xlsxwriter")
df.to_excel(writer, sheet_name="Sheet1")

# Save the workbook
writer.save()

# Seek to the beginning and read to copy the workbook to a variable in memory
bio.seek(0)
workbook = bio.read()

Note

engine is optional but recommended. Setting the engine determines the version of workbook produced. Setting engine='xlwt' will produce an Excel 2003-format workbook (xls). Using either 'openpyxl' or 'xlsxwriter' will produce an Excel 2007-format workbook (xlsx). If omitted, an Excel 2007-formatted workbook is produced.

Excel writer engines

Deprecated since version 1.2.0: As the xlwt package is no longer maintained, the xlwt engine will be removed from a future version of pandas. This is the only engine in pandas that supports writing to .xls files.

pandas chooses an Excel writer via two methods:

  1. the engine keyword argument

  2. the filename extension (via the default specified in config options)

By default, pandas uses the XlsxWriter for .xlsx, openpyxl for .xlsm, and xlwt for .xls files. If you have multiple engines installed, you can set the default engine through setting the config options io.excel.xlsx.writer and io.excel.xls.writer. pandas will fall back on openpyxl for .xlsx files if Xlsxwriter is not available.

To specify which writer you want to use, you can pass an engine keyword argument to to_excel and to ExcelWriter. The built-in engines are:

  • openpyxl: version 2.4 or higher is required

  • xlsxwriter

  • xlwt

df.to_excel("path_to_file.xlsx", sheet_name="Sheet1", engine="xlsxwriter")

Style and formatting

The look and feel of Excel worksheets created from pandas can be modified using the following parameters on the DataFrame's to_excel method.

  • float_format : Format string for floating point numbers (default None)

  • freeze_panes : A tuple of two integers representing the bottommost row and rightmost column to freeze. Each of these parameters is one-based, so (1, 1) will freeze the first row and first column (default None)
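float_format has the same meaning on to_csv, which needs no Excel engine, so the formatting effect can be sketched without writing a workbook:

```python
import pandas as pd

df = pd.DataFrame({"x": [1.23456, 2.5]})

# "%.2f" renders every float with two decimal places on output
out = df.to_csv(float_format="%.2f", index=False)

print(out)  # "x\n1.23\n2.50\n"
```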

Using the Xlsxwriter engine provides many options for controlling the format of an Excel worksheet created with the to_excel method. Excellent examples can be found in the Xlsxwriter documentation here: https://xlsxwriter.readthedocs.io/working_with_pandas.html

OpenDocument Spreadsheets

New in version 0.25.

The read_excel() method can also read OpenDocument spreadsheets using the odfpy module. The semantics and features for reading OpenDocument spreadsheets match what can be done for Excel files using engine='odf':

pd.read_excel("path_to_file.ods", engine="odf")

Note

Currently pandas only supports reading OpenDocument spreadsheets. Writing is not implemented.

Binary Excel (.xlsb) files

New in version 1.0.0.

The read_excel() method can also read binary Excel files using the pyxlsb module. The semantics and features for reading binary Excel files mostly match what can be done for Excel files using engine='pyxlsb'. pyxlsb does not recognize datetime types in files and will return floats instead:

pd.read_excel("path_to_file.xlsb", engine="pyxlsb")

Note

Currently pandas only supports reading binary Excel files. Writing is not implemented.

Clipboard

A handy way to grab data is to use the read_clipboard() method, which takes the contents of the clipboard buffer and passes them to the read_csv method. For instance, you can copy the following text to the clipboard (CTRL-C on many operating systems):

  A B C
x 1 4 p
y 2 5 q
z 3 6 r

And then import the data directly to a DataFrame by calling:

clipdf = pd.read_clipboard()

The to_clipboard() method can be used to write the contents of a DataFrame to the clipboard, following which you can paste the clipboard contents into other applications (CTRL-V on many operating systems). Here we illustrate writing a DataFrame into the clipboard and reading it back:

df = pd.DataFrame(
    {"A": [1, 2, 3], "B": [4, 5, 6], "C": ["p", "q", "r"]}, index=["x", "y", "z"]
)
df.to_clipboard()

pd.read_clipboard()
   A  B  C
x  1  4  p
y  2  5  q
z  3  6  r

We can see that we got the same content back, which we had earlier written to the clipboard.

Note

You may need to install xclip or xsel (with PyQt5, PyQt4 or qtpy) on Linux to use these methods.

Pickling

All pandas objects are equipped with to_pickle methods which use Python's pickle module to save data structures to disk using the pickle format.

df.to_pickle("foo.pkl")

The read_pickle function in the pandas namespace can be used to load any pickled pandas object (or any other pickled object) from file:

pd.read_pickle("foo.pkl")
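As a minimal, self-contained sketch of this round trip (the file path and sample data below are our own, not from the examples above), to_pickle followed by read_pickle restores the DataFrame exactly, dtypes included:

```python
import os
import tempfile

import pandas as pd

# Build a small frame and pickle it to a temporary file.
df = pd.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]})
path = os.path.join(tempfile.mkdtemp(), "foo.pkl")
df.to_pickle(path)

# Reading it back yields an identical DataFrame.
restored = pd.read_pickle(path)
print(restored.equals(df))  # True
```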

Warning

Loading pickled data received from untrusted sources can be unsafe

See https://docs.python.org/3/library/pickle.html

Warning

read_pickle() is only guaranteed backwards compatible back to pandas version 0.20.3.

Compressed pickle files

read_pickle(), DataFrame.to_pickle() and Series.to_pickle() can read and write compressed pickle files. The compression types of gzip, bz2, xz, and zstd are supported for reading and writing. The zip file format only supports reading and must contain only one data file to be read.

The compression type can be an explicit parameter or be inferred from the file extension. If 'infer', then use gzip, bz2, zip, xz, or zstd if the filename ends in '.gz', '.bz2', '.zip', '.xz', or '.zst', respectively.

The compression parameter can also be a dict in order to pass options to the compression protocol. It must have a 'method' key set to the name of the compression protocol, which must be one of {'zip', 'gzip', 'bz2', 'xz', 'zstd'}. All other key-value pairs are passed to the underlying compression library.

df = pd.DataFrame(
    {
        "A": np.random.randn(1000),
        "B": "foo",
        "C": pd.date_range("20130101", periods=1000, freq="s"),
    }
)

Using an explicit compression type

df.to_pickle("data.pkl.compress", compression="gzip")
rt = pd.read_pickle("data.pkl.compress", compression="gzip")

Inferring compression type from the extension

df.to_pickle("data.pkl.xz", compression="infer")
rt = pd.read_pickle("data.pkl.xz", compression="infer")

The default is to ‘infer’

df.to_pickle("data.pkl.gz")
rt = pd.read_pickle("data.pkl.gz")

Passing options to the compression protocol in order to speed up compression

df.to_pickle("data.pkl.gz", compression={"method": "gzip", "compresslevel": 1})
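Putting these pieces together, here is a small runnable sketch (temporary file paths are our own) showing both extension-based inference and dict-based compression options on a round trip:

```python
import os
import tempfile

import numpy as np
import pandas as pd

df = pd.DataFrame({"A": np.arange(100.0), "B": "foo"})
tmp = tempfile.mkdtemp()

# Compression is inferred from the ".gz" extension on both write and read.
gz_path = os.path.join(tmp, "data.pkl.gz")
df.to_pickle(gz_path)
assert pd.read_pickle(gz_path).equals(df)

# A dict passes extra options (here a low gzip compression level) through
# to the underlying compression library.
fast_path = os.path.join(tmp, "data_fast.pkl.gz")
df.to_pickle(fast_path, compression={"method": "gzip", "compresslevel": 1})
restored = pd.read_pickle(fast_path)
print(restored.equals(df))  # True
```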

msgpack

pandas support for msgpack has been removed in version 1.0.0. It is recommended to use pickle instead.

Alternatively, you can also use the Arrow IPC serialization format for on-the-wire transmission of pandas objects. For documentation on pyarrow, see here.

HDF5 (PyTables)

HDFStore is a dict-like object which reads and writes pandas using the high performance HDF5 format using the excellent PyTables library. See the cookbook for some advanced strategies.

Warning

pandas uses PyTables for reading and writing HDF5 files, which allows serializing object-dtype data with pickle. Loading pickled data received from untrusted sources can be unsafe

See https://docs.python.org/3/library/pickle.html for more.

store = pd.HDFStore("store.h5")
print(store)

Objects can be written to the file just like adding key-value pairs to a dict:

index = pd.date_range("1/1/2000", periods=8)
s = pd.Series(np.random.randn(5), index=["a", "b", "c", "d", "e"])
df = pd.DataFrame(np.random.randn(8, 3), index=index, columns=["A", "B", "C"])

# store.put("s", s) is an equivalent method
store["s"] = s
store["df"] = df

In a current or later Python session, you can retrieve stored objects

# store.get("df") is an equivalent method
store["df"]

# dotted (attribute) access provides get as well
store.df

Deletion of the object specified by the key

# store.remove("df") is an equivalent method
del store["df"]

Closing a Store and using a context manager

store.close()

# Working with, and automatically closing, the store using a context manager
with pd.HDFStore("store.h5") as store:
    store.keys()

Read/write API

HDFStore supports a top-level API using read_hdf for reading and to_hdf for writing, similar to how read_csv and to_csv work:

df_tl = pd.DataFrame({"A": list(range(5)), "B": list(range(5))})
df_tl.to_hdf("store_tl.h5", "table", append=True)
pd.read_hdf("store_tl.h5", "table", where=["index>2"])

HDFStore will by default not drop rows that are all missing. This behavior can be changed by setting dropna=True:

df_with_missing = pd.DataFrame(
    {"col1": [0, np.nan, 2], "col2": [1, np.nan, np.nan]}
)
df_with_missing.to_hdf("file.h5", "df_with_missing", format="table", mode="w")
pd.read_hdf("file.h5", "df_with_missing")

df_with_missing.to_hdf(
    "file.h5", "df_with_missing", format="table", mode="w", dropna=True
)
pd.read_hdf("file.h5", "df_with_missing")

Fixed format

The examples above show storing using put, which writes the HDF5 to PyTables in a fixed array format, called the fixed format. These types of stores are not appendable once written (though you can simply remove them and rewrite). Nor are they queryable; they must be retrieved in their entirety. They also do not support dataframes with non-unique column names. The fixed format stores offer very fast writing and slightly faster reading than table stores. This format is specified by default when using put or to_hdf, or by format='fixed' or format='f'.

Warning

A fixed format store will raise a TypeError if you try to retrieve using a where:

pd.DataFrame(np.random.randn(10, 2)).to_hdf("test_fixed.h5", "df")
pd.read_hdf("test_fixed.h5", "df", where="index>5")
# TypeError: cannot pass a where specification when reading a fixed format.
#            this store must be selected in its entirety

Table format

HDFStore supports another PyTables format on disk, the table format. Conceptually a table is shaped very much like a DataFrame, with rows and columns. A table may be appended to in the same or other sessions. In addition, delete and query type operations are supported. This format is specified by format='table' or format='t' to append or put or to_hdf.

This format can also be set as the option io.hdf.default_format to enable put/append/to_hdf to store by default in the table format:

store = pd.HDFStore("store.h5")
df1 = df[0:4]
df2 = df[4:]

# append data (creates a table automatically)
store.append("df", df1)
store.append("df", df2)

# select the entire object
store.select("df")

# the type of stored data
store.root.df._v_attrs.pandas_type

Note

You can also create a table by passing format='table' or format='t' to a put operation.

Hierarchical keys

Keys to a store can be specified as a string. These can be in a hierarchical path-name like format (e.g. foo/bar/bah), which will generate a hierarchy of sub-stores (or Groups in PyTables parlance). Keys can be specified without the leading '/' and are always absolute (e.g. 'foo' refers to '/foo'). Removal operations can remove everything in the sub-store and below, so be careful.

store.put("foo/bar/bah", df)
store.append("food/orange", df)
store.append("food/apple", df)

# a list of keys are returned
store.keys()

# remove all nodes under this level
store.remove("food")

You can walk through the group hierarchy using the walk method, which will yield a tuple for each group key along with the relative keys of its contents:

for (path, subgroups, subkeys) in store.walk():
    for subgroup in subgroups:
        print("GROUP: {}/{}".format(path, subgroup))
    for subkey in subkeys:
        key = "/".join([path, subkey])
        print("KEY: {}".format(key))
        print(store.get(key))

Warning

Hierarchical keys cannot be retrieved as dotted (attribute) access as described above for items stored under the root node

store.foo.bar.bah
# AttributeError: 'HDFStore' object has no attribute 'foo'

Instead, use explicit string-based keys:

store["foo/bar/bah"]

Storing types

Storing mixed types in a table

Storing mixed-dtype data is supported. Strings are stored as fixed-width using the maximum size of the appended column. Subsequent attempts at appending longer strings will raise a ValueError.

Passing min_itemsize={'values': size} as a parameter to append will set a larger minimum for the string columns. Storing floats, strings, ints, bools, and datetime64 is currently supported. For string columns, passing nan_rep = 'nan' to append will change the default nan representation on disk (which converts to/from np.nan); this defaults to nan.

df_mixed = pd.DataFrame(
    {
        "A": np.random.randn(8),
        "B": np.random.randn(8),
        "C": np.array(np.random.randn(8), dtype="float32"),
        "string": "string",
        "int": 1,
        "bool": True,
        "datetime64": pd.Timestamp("20010102"),
    },
    index=list(range(8)),
)

store.append("df_mixed", df_mixed, min_itemsize={"values": 50})
df_mixed1 = store.select("df_mixed")
df_mixed1.dtypes.value_counts()

Storing MultiIndex DataFrames

Storing MultiIndex DataFrames as tables is very similar to storing/selecting from homogeneous index DataFrames:

index = pd.MultiIndex(
    levels=[["foo", "bar", "baz", "qux"], ["one", "two", "three"]],
    codes=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
    names=["foo", "bar"],
)
df_mi = pd.DataFrame(np.random.randn(10, 3), index=index, columns=["A", "B", "C"])

store.append("df_mi", df_mi)
store.select("df_mi")

# the levels are automatically included as data columns
store.select("df_mi", "foo=bar")

Note

The index keyword is reserved and cannot be used as a level name.

Querying

Querying a table

The select and delete operations have an optional criterion that can be specified to select/delete only a subset of the data. This allows one to have a very large on-disk table and retrieve only a portion of the data.

A query is specified using the Term class under the hood, as a boolean expression.

  • index and columns are supported indexers of DataFrames

  • if data_columns are specified, these can be used as additional indexers

  • level name in a MultiIndex, with default name level_0, level_1, … if not provided

Valid comparison operators are: =, ==, !=, >, >=, <, <=

Valid boolean expressions are combined with

  • | : or

  • & : and

  • ( and ) : for grouping

These rules are similar to how boolean expressions are used in pandas for indexing
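Because the rules mirror in-memory boolean indexing, the same style of expression can be previewed without a store using DataFrame.query (an in-memory analogue, not HDFStore itself):

```python
import pandas as pd

df = pd.DataFrame({"A": [1, -2, 3], "B": [4, 5, -6]})

# & combines sub-expressions, parentheses group, and comparison
# operators behave as in the table query syntax.
result = df.query("(A > 0) & (B > 0)")
print(result)
#    A  B
# 0  1  4
```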

Note

  • = will be automatically expanded to the comparison operator ==

  • ~ is the not operator, but can only be used in very limited circumstances

  • If a list/tuple of expressions is passed they will be combined via &

The following are valid expressions

  • 'index >= date'

  • "columns = ['A', 'D']"

  • "columns in ['A', 'D']"

  • "columns == ['A', 'D']"

  • "~(columns = ['A', 'B'])"

  • 'index > df.index[3] & string = "bar"'

  • '(index > df.index[3] & index <= df.index[6]) | string = "bar"'

  • "ts >= Timestamp('2012-02-01')"

  • "major_axis>=20130101"

The indexers are on the left-hand side of the sub-expression: columns, major_axis, ts

The right-hand side of the sub-expression (after a comparison operator) can be

  • functions that will be evaluated, e.g. Timestamp('2012-02-01')

  • strings, e.g. "bar"

  • date-like, e.g. 20130101, or "20130101"

  • lists, e.g. "['A', 'B']"

  • variables that are defined in the local names space, e.g. date

Note

Passing a string to a query by interpolating it into the query expression is not recommended. Simply assign the string of interest to a variable and use that variable in an expression. For example, do this

string = "HolyMoly'"
store.select("df", "index == string")

instead of this

string = "HolyMoly'"
store.select('df', f'index == {string}')

The latter will not work and will raise a SyntaxError. Note that there's a single quote followed by a double quote in the string variable.

If you must interpolate, use the '%r' format specifier

store.select("df", "index == %r" % string)

which will quote string
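The effect of the '%r' specifier can be seen without a store at all; plain Python string formatting shows why it safely quotes the value (the variable names here are illustrative):

```python
string = "HolyMoly'"

# %r inserts repr(string); because the value contains a single quote,
# Python's repr switches to double quotes, yielding a valid query string.
expr = "index == %r" % string
print(expr)  # index == "HolyMoly'"
```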

Here are some examples

dfq = pd.DataFrame(
    np.random.randn(10, 4),
    columns=list("ABCD"),
    index=pd.date_range("20130101", periods=10),
)
store.append("dfq", dfq, format="table", data_columns=True)

Use boolean expressions, with in-line function evaluation

store.select("dfq", "index>pd.Timestamp('20130104') & columns=['A', 'B']")

Use inline column reference

store.select("dfq", where="A>0 or C>0")

The columns keyword can be supplied to select a list of columns to be returned, this is equivalent to passing a 'columns=list_of_columns_to_filter'

store.select("df", "columns=['A', 'B']")

start and stop parameters can be specified to limit the total search space. These are in terms of the total number of rows in a table.

Note

select will raise a ValueError if the query expression has an unknown variable reference. Usually this means that you are trying to select on a column that is not a data_column.

select will raise a SyntaxError if the query expression is not valid.

Query timedelta64[ns]

You can store and query using the timedelta64[ns] type. Terms can be specified in the format <float>(<unit>), where float may be signed (and fractional), and unit can be D,s,ms,us,ns for the timedelta. Here's an example

from datetime import timedelta

dftd = pd.DataFrame(
    {
        "A": pd.Timestamp("20130101"),
        "B": [
            pd.Timestamp("20130101") + timedelta(days=i, seconds=10)
            for i in range(10)
        ],
    }
)
dftd["C"] = dftd["A"] - dftd["B"]
store.append("dftd", dftd, data_columns=True)
store.select("dftd", "C<'-3.5D'")
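The same comparison can be exercised in memory with a boolean mask (an analogue of the on-disk query, not HDFStore itself):

```python
import pandas as pd

# A timedelta64[ns] column built from day offsets.
dftd = pd.DataFrame({"C": pd.to_timedelta(["-1 days", "-2 days", "-4 days"])})

# "C<'-3.5D'" in a table query corresponds to this boolean mask.
result = dftd[dftd["C"] < pd.Timedelta(days=-3.5)]
print(result["C"].tolist())  # [Timedelta('-4 days +00:00:00')]
```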

Query MultiIndex

Selecting from a MultiIndex can be achieved by using the name of the level

df_mi.index.names
store.select("df_mi", "foo=baz and bar=two")

If the MultiIndex levels names are None, the levels are automatically made available via the level_n keyword with n the level of the MultiIndex you want to select from

index = pd.MultiIndex(
    levels=[["foo", "bar", "baz", "qux"], ["one", "two", "three"]],
    codes=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
)
df_mi_2 = pd.DataFrame(np.random.randn(10, 3), index=index, columns=["A", "B", "C"])
store.append("df_mi_2", df_mi_2)

# the levels are automatically included as data columns with keyword level_n
store.select("df_mi_2", "level_0=foo and level_1=two")
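In memory, an unnamed level is reached positionally; the on-disk level_0 keyword plays the same role. A minimal sketch of that correspondence (values are illustrative):

```python
import numpy as np
import pandas as pd

# A two-level MultiIndex with no level names.
index = pd.MultiIndex.from_product([["foo", "baz"], ["one", "two"]])
df_mi_2 = pd.DataFrame(np.arange(8).reshape(4, 2), index=index, columns=["A", "B"])

# store.select("df_mi_2", "level_0=foo") corresponds to selecting on the
# first (unnamed) level:
result = df_mi_2[df_mi_2.index.get_level_values(0) == "foo"]
print(len(result))  # 2
```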

Indexing

You can create/modify an index for a table with create_table_index after data is already in the table (after an append/put operation). Creating a table index is highly encouraged. This will speed your queries a great deal when you use a select with the indexed dimension as the where.

Note

Indexes are automagically created on the indexables and any data columns you specify. This behavior can be turned off by passing index=False to append.

# we have automagically already created an index (in the first section)
i = store.root.df.table.cols.index.index
i.optlevel, i.kind

# change an index by passing new parameters
store.create_table_index("df", optlevel=9, kind="full")

Oftentimes when appending large amounts of data to a store, it is useful to turn off index creation for each append, then recreate at the end

df_1 = pd.DataFrame(np.random.randn(10, 2), columns=list("AB"))
df_2 = pd.DataFrame(np.random.randn(10, 2), columns=list("AB"))

st = pd.HDFStore("appends.h5", mode="w")
st.append("df", df_1, data_columns=["B"], index=False)
st.append("df", df_2, data_columns=["B"], index=False)
st.get_storer("df").table

Then create the index when finished appending

st.create_table_index("df", columns=["B"], optlevel=9, kind="full")
st.get_storer("df").table
st.close()

See here for how to create a completely-sorted-index (CSI) on an existing store

Query via data columns

You can designate (and index) certain columns that you want to be able to perform queries on (other than the indexable columns, which you can always query). For instance say you want to perform this common operation, on-disk, and return just the frame that matches this query. You can specify data_columns=True to force all columns to be data_columns.

df_dc = df.copy()
df_dc["string"] = "foo"
df_dc["string2"] = "cool"

# on-disk operations
store.append("df_dc", df_dc, data_columns=["B", "C", "string", "string2"])
store.select("df_dc", where="B > 0")

# getting creative
store.select("df_dc", "B > 0 & C > 0 & string == foo")

There is some performance degradation by making lots of columns into data columns, so it is up to the user to designate these. In addition, you cannot change data columns (nor indexables) after the first append/put operation. (Of course you can simply read in the data and create a new table.)
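The common operation of filtering on several data columns at once looks like this in memory; on disk, the same expression would be handed to select as a where clause (column names here are illustrative):

```python
import pandas as pd

df_dc = pd.DataFrame({"B": [1.0, -0.5, 2.0], "C": [0.3, 1.0, -0.1],
                      "string": ["foo", "foo", "bar"]})

# In-memory version of store.select("df_dc", "B > 0 & C > 0 & string == foo")
result = df_dc[(df_dc.B > 0) & (df_dc.C > 0) & (df_dc.string == "foo")]
print(result.index.tolist())  # [0]
```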

Iterator

You can pass iterator=True or chunksize=number_in_a_chunk to select and select_as_multiple to return an iterator on the results. The default is 50,000 rows returned in a chunk.

for df in store.select("df", chunksize=3):
    print(df)

Note

You can also use the iterator with read_hdf which will open, then automatically close the store when finished iterating.

for df in pd.read_hdf("store.h5", "df", chunksize=3):
    print(df)

Note that the chunksize keyword applies to the source rows. So if you are doing a query, the chunksize will subdivide the total rows in the table with the query applied, returning an iterator of potentially unequal sized chunks.

Here is a recipe for generating a query and using it to create equal sized return chunks

dfeq = pd.DataFrame({"number": np.arange(1, 11)})
store.append("dfeq", dfeq, data_columns=["number"])

def chunks(l, n):
    return [l[i: i + n] for i in range(0, len(l), n)]

evens = [2, 4, 6, 8, 10]
coordinates = store.select_as_coordinates("dfeq", "number=evens")
for c in chunks(coordinates, 2):
    print(store.select("dfeq", where=c))
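The chunking step of this recipe is plain Python and can be checked on its own (the list below stands in for the coordinates a query would return):

```python
def chunks(values, n):
    """Split a sequence into consecutive pieces of length n (last may be shorter)."""
    return [values[i: i + n] for i in range(0, len(values), n)]

coordinates = [1, 3, 5, 7, 9]  # e.g. row locations matching a query
print(chunks(coordinates, 2))  # [[1, 3], [5, 7], [9]]
```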

Advanced queries

Select a single column

To retrieve a single indexable or data column, use the method select_column. This will, for example, enable you to get the index very quickly. These return a Series of the result, indexed by the row number. These do not currently accept the where selector.

store.select_column("df_dc", "index")
store.select_column("df_dc", "string")

Selecting coordinates

Sometimes you want to get the coordinates (a.k.a. the index locations) of your query. This returns an Index of the resulting locations. These coordinates can also be passed to subsequent where operations.

df_coord = pd.DataFrame(
    np.random.randn(1000, 2), index=pd.date_range("20000101", periods=1000)
)
store.append("df_coord", df_coord)
c = store.select_as_coordinates("df_coord", "index > 20020101")
store.select("df_coord", where=c)

Chọn bằng cách sử dụng mặt nạ where

Sometimes your query can involve creating a list of rows to select. Usually this mask would be a resulting index from an indexing operation. This example selects the months of a DatetimeIndex which are 5.

df_mask = pd.DataFrame(np.random.randn(1000, 2), index=pd.date_range("20000101", periods=1000))
store.append("df_mask", df_mask)
c = store.select_column("df_mask", "index")
where = c[pd.DatetimeIndex(c).month == 5].index
store.select("df_mask", where=where)
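In-memory DataFrames support the same month-based mask directly; a small sketch of "select the rows whose DatetimeIndex month is 5" (column name and date range invented for illustration):

```python
import numpy as np
import pandas as pd

df_mask = pd.DataFrame({"A": np.arange(12.0)},
                       index=pd.date_range("2000-01-01", periods=12, freq="MS"))

# Boolean mask over the DatetimeIndex: keep only rows in May
may_rows = df_mask[df_mask.index.month == 5]
```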

Storer object

If you want to inspect the stored object, retrieve via get_storer. You could use this programmatically to say get the number of rows in an object.

store.get_storer("df_dc").nrows

Multiple table queries

The methods append_to_multiple and select_as_multiple can perform appending/selecting from multiple tables at once. The idea is to have one table (call it the selector table) that you index most/all of the columns, and perform your queries. The other table(s) are data tables with an index matching the selector table's index. You can then perform a very fast query on the selector table, yet get lots of data back. This method is similar to having a very wide table, but enables more efficient queries.

The append_to_multiple method splits a given single DataFrame into multiple tables according to d, a dictionary that maps the table names to a list of 'columns' you want in that table. If None is used in place of a list, that table will have the remaining unspecified columns of the given DataFrame. The argument selector defines which table is the selector table (which you can make queries from). The argument dropna will drop rows from the input DataFrame to ensure tables are synchronized. This means that if a row for one of the tables being written to is entirely np.nan, that row will be dropped from all tables.

If dropna is False, THE USER IS RESPONSIBLE FOR SYNCHRONIZING THE TABLES. Remember that entirely np.nan rows are not written to the HDFStore, so if you choose to call dropna=False, some tables may have more rows than others, and therefore select_as_multiple may not work or it may return unexpected results.

df_mt = pd.DataFrame(np.random.randn(8, 6), index=pd.date_range("1/1/2000", periods=8), columns=["A", "B", "C", "D", "E", "F"])
store.append_to_multiple({"df1_mt": ["A", "B"], "df2_mt": None}, df_mt, selector="df1_mt")
store.select_as_multiple(["df1_mt", "df2_mt"], where=["A>0", "B>0"], selector="df1_mt")
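The selector/data-table split can be imitated in memory to see why it helps: query only the narrow selector frame, then pull matching rows from the wide frame by shared index (a sketch — in an HDFStore, append_to_multiple does the splitting for you):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.standard_normal((8, 6)), columns=list("ABCDEF"))

selector = df[["A", "B"]]            # narrow "selector" table: query this
data = df.drop(columns=["A", "B"])   # wide "data" table, same index

hits = selector.query("A > 0 and B > 0").index   # fast query on few columns
result = pd.concat([selector.loc[hits], data.loc[hits]], axis=1)
```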

Delete from a table

You can delete from a table selectively by specifying a where. In deleting rows, it is important to understand that PyTables deletes rows by erasing the rows, then moving the following data. Thus deleting can potentially be a very expensive operation depending on the orientation of your data. To get optimal performance, it's worthwhile to have the dimension you are deleting be the first of the indexables.

Data is ordered (on the disk) in terms of the indexables. Here's a simple use case. You store panel-type data, with dates in the major_axis and ids in the minor_axis. The data is then interleaved like this

  • date_1
    • id_1

    • id_2

    • .

    • id_n

  • date_2
    • id_1

    • .

    • id_n

It should be clear that a delete operation on the major_axis will be fairly quick, as one chunk is removed, then the following data moved. On the other hand a delete operation on the minor_axis will be very expensive. In this case it would almost certainly be faster to rewrite the table using a where that selects all but the missing data.
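The cost asymmetry follows directly from the on-disk ordering and can be checked with a plain Python model of date-major interleaving — rows for one date are contiguous, rows for one id are scattered:

```python
# Rows stored date-major: all ids for date 0, then all ids for date 1, ...
rows = [(date, i) for date in range(3) for i in range(4)]

# Deleting a whole date hits one contiguous run of rows (cheap)
date_pos = [k for k, (date, _) in enumerate(rows) if date == 1]
date_contiguous = date_pos == list(range(date_pos[0], date_pos[0] + len(date_pos)))

# Deleting one id hits a row in every date block (expensive)
id_pos = [k for k, (_, i) in enumerate(rows) if i == 2]
id_contiguous = id_pos == list(range(id_pos[0], id_pos[0] + len(id_pos)))
```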

Warning

Please note that HDF5 DOES NOT RECLAIM SPACE in the h5 files automatically. Thus, repeatedly deleting (or removing nodes) and adding again, WILL TEND TO INCREASE THE FILE SIZE

To repack and clean the file, use ptrepack

Notes & caveats

Compression

PyTables allows the stored data to be compressed. This applies to all kinds of stores, not just tables. Two parameters are used to control compression: complevel and complib.

  • complevel specifies if and how hard data is to be compressed. complevel=0 and complevel=None disable compression and 0<complevel<10 enables compression.

  • complib specifies which compression library to use. If nothing is specified the default library zlib is used. A compression library usually optimizes for either good compression rates or speed and the results will depend on the type of data. Which type of compression to choose depends on your specific needs and data. The list of supported compression libraries:

    • zlib: The default compression library. A classic in terms of compression, achieves good compression rates but is somewhat slow.

    • lzo: Fast compression and decompression.

    • bzip2: Good compression rates.

    • blosc: Fast compression and decompression.

      Support for alternative blosc compressors:

      • blosc:blosclz This is the default compressor for blosc

      • blosc:lz4: A compact, very popular and fast compressor.

      • blosc:lz4hc: A tweaked version of LZ4, produces better compression ratios at the expense of speed.

      • blosc:snappy: A popular compressor used in many places.

      • blosc:zlib: A classic; somewhat slower than the previous ones, but achieving better compression ratios.

      • blosc:zstd: An extremely well balanced codec; it provides the best compression ratios among the others above, and at reasonably fast speed.

    If complib is defined as something other than the listed libraries a ValueError exception is issued.
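The rate-versus-speed trade-off is easy to feel with two codecs that ship in the Python standard library (zlib and bz2 here stand in for the PyTables complib choices; real ratios depend on your data):

```python
import bz2
import zlib

payload = b"2000-01-01,AAPL,100.25\n" * 5000  # highly repetitive, like tabular data

z = zlib.compress(payload, 9)   # classic: decent ratio, moderate speed
b = bz2.compress(payload, 9)    # usually better ratio, slower

ratios = {"zlib": len(z) / len(payload), "bzip2": len(b) / len(payload)}
```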

Note

If the library specified with the complib option is missing on your platform, compression defaults to zlib without further ado.

Enable compression for all objects within the file

store_compressed = pd.HDFStore("store_compressed.h5", complevel=9, complib="blosc:blosclz")

Or on-the-fly compression (this only applies to tables) in stores where compression is not enabled

store.append("df", df, complib="zlib", complevel=5)

ptrepack

PyTables offers better write performance when tables are compressed after they are written, as opposed to turning on compression at the very beginning. You can use the supplied PyTables utility ptrepack. In addition, ptrepack can change compression levels after the fact.

ptrepack --chunkshape=auto --propindexes --complevel=9 --complib=blosc in.h5 out.h5

Furthermore ptrepack in.h5 out.h5 will repack the file to allow you to reuse previously deleted space. Alternatively, one can simply remove the file and write again, or use the copy method.

Caveats

Warning

HDFStore is not-threadsafe for writing. The underlying PyTables only supports concurrent reads (via threading or processes). If you need reading and writing at the same time, you need to serialize these operations in a single thread in a single process. You will corrupt your data otherwise. See the (GH2397) for more information.
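One common way to honor "serialize writes in a single thread" is to funnel every write through one worker thread; a minimal sketch where a plain list stands in for the HDF5 file:

```python
import queue
import threading

writes = queue.Queue()
store = []  # stand-in for the HDF5 file; only the writer thread touches it

def writer():
    # Sole owner of the store: drains the queue until the None sentinel
    while True:
        item = writes.get()
        if item is None:
            break
        store.append(item)

t = threading.Thread(target=writer)
t.start()
for i in range(100):        # producers enqueue instead of writing directly
    writes.put(i)
writes.put(None)            # sentinel: no more writes
t.join()
```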

  • If you use locks to manage write access between multiple processes, you may want to use fsync() before releasing write locks. For convenience you can use store.flush(fsync=True) to do this for you.

  • Once a table is created columns (DataFrame) are fixed; only exactly the same columns can be appended.

  • Be aware that timezones (e.g., pytz.timezone('US/Eastern')) are not necessarily equal across timezone versions. So if data is localized to a specific timezone in the HDFStore using one version of a timezone library and that data is updated with another version, the data will be converted to UTC since these timezones are not considered equal. Either use the same version of timezone library or use tz_convert with the updated timezone definition.

Warning

HDFStore will show a NaturalNameWarning if a column name cannot be used as an attribute selector. Natural identifiers contain only letters, numbers, and underscores, and may not begin with a number. Other identifiers cannot be used in a where clause and are generally a bad idea.
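The natural-identifier rule can be checked up front, before creating columns; a small helper (the function name is ours, not a pandas API):

```python
import re

def is_natural_name(name: str) -> bool:
    """True if `name` can be used as a PyTables attribute selector:
    letters, digits and underscores only, not starting with a digit."""
    return re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", name) is not None
```

Renaming offending columns before appending avoids the NaturalNameWarning entirely.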

DataTypes

HDFStore will map an object dtype to the PyTables underlying dtype. This means the following types are known to work:

Type                                                  Represents missing values

floating : float64, float32, float16                  np.nan

integer : int64, int32, int8, uint64, uint32, uint8

boolean

datetime64[ns]                                        NaT

timedelta64[ns]                                       NaT

categorical : see the section below

object : strings                                      np.nan

unicode columns are not supported, and WILL FAIL.

Categorical data

You can write data that contains category dtypes to a HDFStore. Queries work the same as if it was an object array. However, the category dtyped data is stored in a more efficient manner.

dfcat = pd.DataFrame({"A": pd.Series(list("aabbcdba")).astype("category"), "B": np.random.randn(8)})
store.append("dfcat", dfcat, data_columns=["A"])
result = store.select("dfcat", "A in ['b', 'c']")
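"Stored in a more efficient manner" mirrors what category already does in memory — each distinct string is kept once, plus small integer codes — which you can verify without any HDF5 file:

```python
import pandas as pd

s = pd.Series(["low", "medium", "high"] * 1000)

obj_bytes = s.memory_usage(deep=True)                     # every string repeated
cat_bytes = s.astype("category").memory_usage(deep=True)  # 3 strings + small codes
```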

String columns

min_itemsize

The underlying implementation of HDFStore uses a fixed column width (itemsize) for string columns. A string column itemsize is calculated as the maximum of the length of data (for that column) that is passed to the HDFStore, in the first append. Subsequent appends may introduce a string for a column that is larger than the column can hold; an Exception will be raised (otherwise you could have a silent truncation of these columns, leading to loss of information). In the future we may relax this and allow a user-specified truncation to occur.

Pass min_itemsize on the first table creation to a-priori specify the minimum length of a particular string column. min_itemsize can be an integer, or a dict mapping a column name to an integer. You can pass values as a key to allow all indexables or data_columns to have this min_itemsize.

Passing a min_itemsize dict will cause all passed columns to be created as data_columns automatically.

Note

If you are not passing any data_columns, then the min_itemsize will be the maximum of the length of any string passed.

dfs = pd.DataFrame({"A": "foo", "B": "bar"}, index=list(range(5)))
store.append("dfs", dfs, min_itemsize={"A": 30})
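The fixed-width behavior is the same one NumPy byte-string columns have (and PyTables builds on): a longer value assigned into a too-narrow slot is silently truncated — exactly what HDFStore raises an Exception to prevent:

```python
import numpy as np

col = np.array([b"foo", b"bar"], dtype="S10")  # 10-byte fixed-width slots
assert col.dtype.itemsize == 10

col[0] = b"much-too-long-string"  # silently clipped to 10 bytes
```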

nan_rep

String columns will serialize a np.nan (a missing value) with the nan_rep string representation. This defaults to the string value nan. You could inadvertently turn an actual nan value into a missing value.

dfss = pd.DataFrame({"A": ["foo", "bar", "nan"]})
store.append("dfss", dfss)
store.select("dfss")  # the string "nan" is read back as a missing value

# instruct the store to use a different representation
store.append("dfss2", dfss, nan_rep="_nan_")
store.select("dfss2")  # the string "nan" survives the round trip
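The same ambiguity exists in pandas' text readers, where the literal string "nan" is treated as missing unless you opt out — a quick way to see the pitfall nan_rep guards against (column name invented):

```python
import pandas as pd
from io import StringIO

raw = "word\nfoo\nnan\n"

as_missing = pd.read_csv(StringIO(raw))                        # "nan" -> NaN
as_string = pd.read_csv(StringIO(raw), keep_default_na=False)  # "nan" kept
```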

External compatibility

In [6]: data = "col1,col2,col3\na,b,1"

In [7]: df = pd.read_csv(StringIO(data))

In [8]: df.columns = [f"pre_{col}" for col in df.columns]

In [9]: df
Out[9]: 
  pre_col1 pre_col2  pre_col3
0        a        b         1
205 writes
In [39]: pd.read_csv(StringIO(data), dtype={"col1": "category"}).dtypes
Out[39]: 
col1    category
col2      object
col3       int64
dtype: object
04 format objects in specific formats suitable for producing loss-less round trips to pandas objects. For external compatibility,
In [6]: data = "col1,col2,col3\na,b,1"

In [7]: df = pd.read_csv(StringIO(data))

In [8]: df.columns = [f"pre_{col}" for col in df.columns]

In [9]: df
Out[9]: 
  pre_col1 pre_col2  pre_col3
0        a        b         1
205 can read native
In [6]: data = "col1,col2,col3\na,b,1"

In [7]: df = pd.read_csv(StringIO(data))

In [8]: df.columns = [f"pre_{col}" for col in df.columns]

In [9]: df
Out[9]: 
  pre_col1 pre_col2  pre_col3
0        a        b         1
213 format tables

It is possible to write an

In [6]: data = "col1,col2,col3\na,b,1"

In [7]: df = pd.read_csv(StringIO(data))

In [8]: df.columns = [f"pre_{col}" for col in df.columns]

In [9]: df
Out[9]: 
  pre_col1 pre_col2  pre_col3
0        a        b         1
205 object that can easily be imported into
In [6]: data = "col1,col2,col3\na,b,1"

In [7]: df = pd.read_csv(StringIO(data))

In [8]: df.columns = [f"pre_{col}" for col in df.columns]

In [9]: df
Out[9]: 
  pre_col1 pre_col2  pre_col3
0        a        b         1
419 using the
In [6]: data = "col1,col2,col3\na,b,1"

In [7]: df = pd.read_csv(StringIO(data))

In [8]: df.columns = [f"pre_{col}" for col in df.columns]

In [9]: df
Out[9]: 
  pre_col1 pre_col2  pre_col3
0        a        b         1
420 library (Package website). Create a table format store like this

In [10]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"

In [11]: pd.read_csv(StringIO(data))
Out[11]: 
  col1 col2  col3
0    a    b     1
1    a    b     2
2    c    d     3

In [12]: pd.read_csv(StringIO(data), skiprows=lambda x: x % 2 != 0)
Out[12]: 
  col1 col2  col3
0    a    b     2
52

In R this file can be read into a data.frame object using the rhdf5 library. An R helper function can read the corresponding column names and data values from the store and assemble them into a data.frame, which lets you import the DataFrame into R.

Note

The R function lists the entire HDF5 file’s contents and assembles the data.frame object from all matching nodes, so use this only as a starting point if you have stored multiple DataFrame objects to a single HDF5 file.

Performance

  • table format stores come with a writing performance penalty as compared to fixed stores. The benefit is the ability to append/delete and query (potentially very large amounts of data). Write times are generally longer as compared with regular stores. Query times can be quite fast, especially on an indexed axis.

  • You can pass chunksize=<int> to append, specifying the write chunksize (default is 50000). This will significantly lower your memory usage on writing.

  • You can pass expectedrows=<int> to the first append, to set the TOTAL number of rows that PyTables will expect. This will optimize read/write performance.

  • Duplicate rows can be written to tables, but are filtered out in selection (with the last items being selected; thus a table is unique on major, minor pairs)

  • A PerformanceWarning will be raised if you are attempting to store types that will be pickled by PyTables (rather than stored as endemic types). See the documentation for more information and some solutions.

Feather

Feather provides binary columnar serialization for data frames. It is designed to make reading and writing data frames efficient, and to make sharing data across data analysis languages easy.

Feather is designed to faithfully serialize and de-serialize DataFrames, supporting all of the pandas dtypes, including extension dtypes such as categorical and datetime with tz.

Several caveats:

  • The format will NOT write an Index, or MultiIndex for the DataFrame and will raise an error if a non-default one is provided. You can .reset_index() to store the index or .reset_index(drop=True) to ignore it.

  • Duplicate column names and non-string column names are not supported.

  • Actual Python objects in object dtype columns are not supported. These will raise a helpful error message on an attempt at serialization.
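The index caveat can be sketched with plain pandas, without a feather write: a non-default index must either be materialized as a column or dropped before serializing.

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2]}, index=["x", "y"])

# Materialize the custom index as an ordinary column that feather can store.
kept = df.reset_index()

# Or discard it, leaving the default RangeIndex that feather accepts.
dropped = df.reset_index(drop=True)

print(list(kept.columns))   # the index becomes an "index" column
print(list(dropped.index))
```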

See the Full Documentation.

Write to a feather file.

Read from a feather file.

Parquet

Apache Parquet provides a partitioned binary columnar serialization for data frames. It is designed to make reading and writing data frames efficient, and to make sharing data across data analysis languages easy. Parquet can use a variety of compression techniques to shrink the file size as much as possible while still maintaining good read performance.

Parquet is designed to faithfully serialize and de-serialize DataFrames, supporting all of the pandas dtypes, including extension dtypes such as datetime with tz.

Several caveats:

  • Duplicate column names and non-string column names are not supported.

  • The pyarrow engine always writes the index to the output, but fastparquet only writes non-default indexes. This extra column can cause problems for non-pandas consumers that are not expecting it. You can force including or omitting indexes with the index argument, regardless of the underlying engine.

  • Index level names, if specified, must be strings.

  • In the pyarrow engine, categorical dtypes for non-string types can be serialized to parquet, but will de-serialize as their primitive dtype.

  • The pyarrow engine preserves the ordered flag of categorical dtypes with string types. fastparquet does not preserve the ordered flag.

  • Non supported types include Interval and actual Python object types. These will raise a helpful error message on an attempt at serialization. The Period type is supported with pyarrow >= 0.16.0.

  • The pyarrow engine preserves extension data types such as the nullable integer and string data type (requiring pyarrow >= 0.16.0, and requiring the extension type to implement the needed protocols, see the extension types documentation).

You can specify an engine to direct the serialization. This can be one of pyarrow, or fastparquet, or auto. If the engine is NOT specified, then the pd.options.io.parquet.engine option is checked; if this is also auto, then pyarrow is tried, falling back to fastparquet.

See the documentation for pyarrow and fastparquet.

Note

These engines are very similar and should read/write nearly identical parquet format files. pyarrow supports timedelta data, fastparquet supports timezone aware datetimes. These libraries differ by having different underlying dependencies (fastparquet by using numba, while pyarrow uses a c-library).


Write to a parquet file.

Read from a parquet file.

Read only certain columns of a parquet file.

Handling indexes

Serializing a DataFrame to parquet may include the implicit index as one or more columns in the output file.

Writing a DataFrame with columns a and b and a default index thus creates a parquet file with three columns if you use pyarrow for serialization: a, b, and __index_level_0__. If you are using fastparquet, the index may or may not be written to the file.

This unexpected extra column causes some databases like Amazon Redshift to reject the file, because that column doesn't exist in the target table.

If you want to omit a dataframe's indexes when writing, pass index=False to to_parquet().

This creates a parquet file with just the two expected columns, a and b. If your DataFrame has a custom index, you won't get it back when you load this file into a DataFrame.

Passing index=True will always write the index, even if that's not the underlying engine's default behavior.

Partitioning Parquet files

Parquet supports partitioning of data based on the values of one or more columns.

The path specifies the parent directory to which data will be saved. The partition_cols are the column names by which the dataset will be partitioned. Columns are partitioned in the order they are given, and the partition splits are determined by the unique values in the partition columns, so a partitioned dataset is written as one subdirectory per unique partition value (for example a=0/, a=1/).

ORC

New in version 1.0.0.

Similar to the parquet format, the ORC format is a binary columnar serialization for data frames. It is designed to make reading data frames efficient. pandas provides both the reader and the writer for the ORC format, read_orc() and to_orc(). This requires the pyarrow library.

Warning

  • It is highly recommended to install pyarrow using conda, due to some issues occurred by pyarrow.

  • to_orc() requires pyarrow>=7.0.0.

  • read_orc() and to_orc() are not yet supported on Windows; you can find valid environments in the documentation on installing optional dependencies.

  • For supported dtypes please refer to the supported ORC features in Arrow.

  • Currently timezones in datetime columns are not preserved when a dataframe is converted into ORC files.


Write to an orc file.

Read from an orc file.

Read only certain columns of an orc file.

SQL queries

The pandas.io.sql module provides a collection of query wrappers to both facilitate data retrieval and to reduce dependency on DB-specific API. Database abstraction is provided by SQLAlchemy if installed. In addition you will need a driver library for your database. Examples of such drivers are psycopg2 for PostgreSQL or pymysql for MySQL. For SQLite this is included in Python's standard library by default. You can find an overview of supported drivers for each SQL dialect in the SQLAlchemy documentation.

If SQLAlchemy is not installed, a fallback is only provided for sqlite (and for mysql for backwards compatibility, but this is deprecated and will be removed in a future version). This mode requires a Python database adapter which respects the Python DB-API.

See also some cookbook examples for some advanced strategies.

The key functions are:

read_sql_table(table_name, con[, schema, ...])

Read SQL database table into a DataFrame.

read_sql_query(sql, con[, index_col, ...])

Read SQL query into a DataFrame.

read_sql(sql, con[, index_col, ...])

Read SQL query or database table into a DataFrame.

DataFrame.to_sql(name, con[, schema, ...])

Write records stored in a DataFrame to a SQL database.

Note

The function read_sql() is a convenience wrapper around read_sql_table() and read_sql_query() (and for backward compatibility) and will delegate to the specific function depending on the provided input (database table name or sql query). Table names do not need to be quoted if they have special characters.

In the following example, we use the SQLite SQL database engine. You can use a temporary SQLite database where data are stored in "memory".

To connect with SQLAlchemy you use the create_engine() function to create an engine object from a database URI. You only need to create the engine once per database you are connecting to. For more information on create_engine() and the URI formatting, see the examples below and the SQLAlchemy documentation.

If you want to manage your own connections you can pass one of those instead. The example below opens a connection to the database using a Python context manager that automatically closes the connection after the block has completed. See the SQLAlchemy docs for an explanation of how the database connection is handled.
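A sketch using the standard-library sqlite3 driver instead of SQLAlchemy (works through the DB-API fallback; the table name and values are illustrative). Note that sqlite3's context manager only manages transactions, so the connection is closed explicitly:

```python
import sqlite3

import pandas as pd

with sqlite3.connect(":memory:") as con:
    con.execute("CREATE TABLE data (id INTEGER, value REAL)")
    con.executemany("INSERT INTO data VALUES (?, ?)", [(1, 0.5), (2, 1.5)])
    df = pd.read_sql_query("SELECT * FROM data", con)
con.close()  # the with-block commits/rolls back but does not close
print(df)
```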

Warning

When you open a connection to a database you are also responsible for closing it. Side effects of leaving a connection open may include locking the database or other breaking behaviour.

Writing DataFrames

Assuming the following data is in a DataFrame data, we can insert it into the database using to_sql().

id  Date        Col_1  Col_2  Col_3
26  2012-10-18  X       25.7  True
42  2012-10-19  Y      -12.4  False
63  2012-10-20  Z       5.73  True

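A runnable sketch of the insert, rebuilding the example table from above; it uses the sqlite3 fallback mode (a plain DB-API connection) so no extra dependency is assumed.

```python
import sqlite3
import pandas as pd

# The example frame from above, rebuilt here so the snippet is self-contained.
data = pd.DataFrame(
    {
        "id": [26, 42, 63],
        "Date": pd.to_datetime(["2012-10-18", "2012-10-19", "2012-10-20"]),
        "Col_1": ["X", "Y", "Z"],
        "Col_2": [25.7, -12.4, 5.73],
        "Col_3": [True, False, True],
    }
)

# In the sqlite3 fallback mode a plain DB-API connection works as the target.
conn = sqlite3.connect(":memory:")
data.to_sql("data", conn, index=False)

round_trip = pd.read_sql_query("SELECT * FROM data", conn)
```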

With some databases, writing large DataFrames can result in errors due to packet size limitations being exceeded. This can be avoided by setting the chunksize parameter when calling to_sql. For example, passing chunksize=1000 writes data to the database in batches of 1000 rows at a time.

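A sketch of batched writing, using an in-memory sqlite3 connection and a hypothetical table name "data_chunked"; with a real server backend, the chunksize choice is what keeps each packet under the size limit.

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
big = pd.DataFrame({"x": range(2500)})

# Insert in batches of 1000 rows to stay under any packet-size limits.
big.to_sql("data_chunked", conn, index=False, chunksize=1000)

count = pd.read_sql_query("SELECT COUNT(*) AS n FROM data_chunked", conn)["n"][0]
```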

SQL data types

to_sql() will try to map your data to an appropriate SQL data type based on the dtype of the data. When you have columns of dtype object, pandas will try to infer the data type.

You can always override the default type by specifying the desired SQL type of any of the columns by using the dtype argument. This argument needs a dictionary mapping column names to SQLAlchemy types (or strings for the sqlite3 fallback mode). For example, you can specify the sqlalchemy String type instead of the default Text type for string columns.

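A sketch of overriding the declared column type. With SQLAlchemy you would pass a type object such as sqlalchemy.String(10); in the sqlite3 fallback mode used here, plain SQL type strings are accepted instead.

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
df = pd.DataFrame({"Col_1": ["a", "b"], "Col_2": [1.5, 2.5]})

# In the sqlite3 fallback mode the dtype mapping uses plain SQL type strings;
# with SQLAlchemy you would pass types such as sqlalchemy.String instead.
df.to_sql("data_dtype", conn, index=False, dtype={"Col_1": "VARCHAR(10)"})

# Inspect the declared column types of the created table.
schema = {row[1]: row[2] for row in conn.execute("PRAGMA table_info(data_dtype)")}
```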

Note

Due to the limited support for timedelta in the different database flavors, columns of type timedelta64 will be written as integer values (as nanoseconds) to the database, and a warning will be raised.

Note

Columns of category dtype will be converted to the dense representation you would get with np.asarray(categorical) (e.g. for string categories this gives an array of strings). Because of this, reading the database table back in does not generate a categorical.

Datetime data types

Using SQLAlchemy, it is possible to write datetime data that is timezone naive or timezone aware. However, the resulting data stored in the database ultimately depends on the supported data types for datetime data of the database system being used.

The following table lists supported data types for datetime data for some common databases. Other database dialects may have different data types for datetime data.

Database     SQL Datetime Types                     Timezone Support
SQLite       TIMESTAMP                              No
MySQL        TIMESTAMP or DATETIME                  No
PostgreSQL   TIMESTAMP or TIMESTAMP WITH TIME ZONE  Yes

When writing timezone aware data to databases that do not support timezones, the data will be written as timezone naive timestamps that are in local time with respect to the timezone.

read_sql_table() is also capable of reading datetime data that is timezone aware or naive. When reading TIMESTAMP WITH TIME ZONE types, pandas will convert the data to UTC.

Insertion method

The method parameter controls the SQL insertion clause used. Possible values are:

  • None: uses standard SQL INSERT clause (one per row).

  • 'multi': pass multiple values in a single INSERT clause. It uses a special SQL syntax not supported by all backends. This usually provides better performance for analytic databases like Presto and Redshift, but worse performance for traditional SQL backends if the table contains many columns. For more information, check the SQLAlchemy documentation.

  • callable with signature (pd_table, conn, keys, data_iter): this can be used to implement a more performant insertion method based on specific backend dialect features.

Example of a callable using PostgreSQL COPY:

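One possible shape of such a callable, sketched under the assumption of a psycopg2 connection underneath SQLAlchemy; it buffers the rows as CSV and streams them with COPY. It is defined but not executed here, since running it requires a PostgreSQL server.

```python
import csv
from io import StringIO

def psql_insert_copy(table, conn, keys, data_iter):
    """Insert callable using PostgreSQL COPY via a psycopg2 raw connection.

    table     : pandas.io.sql.SQLTable
    conn      : sqlalchemy.engine.Connection
    keys      : list of column names
    data_iter : iterable of row tuples
    """
    dbapi_conn = conn.connection  # the raw psycopg2 connection
    with dbapi_conn.cursor() as cur:
        buf = StringIO()
        csv.writer(buf).writerows(data_iter)
        buf.seek(0)
        columns = ", ".join(f'"{k}"' for k in keys)
        table_name = f"{table.schema}.{table.name}" if table.schema else table.name
        cur.copy_expert(f"COPY {table_name} ({columns}) FROM STDIN WITH CSV", buf)
```

You would then pass it as `df.to_sql(..., method=psql_insert_copy)`.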

Reading tables

read_sql_table() will read a database table given the table name and optionally a subset of columns to read.

Note

In order to use read_sql_table(), you must have the SQLAlchemy optional dependency installed.

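A minimal sketch of reading a whole table back, assuming SQLAlchemy is installed and reusing the illustrative table name "data".

```python
import pandas as pd
from sqlalchemy import create_engine

# SQLAlchemy is required for read_sql_table(); in-memory SQLite for illustration.
engine = create_engine("sqlite:///:memory:")
pd.DataFrame({"id": [26, 42], "Col_1": ["X", "Y"]}).to_sql("data", engine, index=False)

table = pd.read_sql_table("data", engine)
```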

Note

Note that pandas infers column dtypes from query outputs, not by looking up data types in the physical database schema. For example, assume userid is an integer column in a table. Then, intuitively, select userid ... will return integer-valued series, while select cast(userid as text) ... will return object-valued (str) series. Accordingly, if the query output is empty, then all resulting columns will be returned as object-valued (since they are most general). If you foresee that your query will sometimes generate an empty result, you may want to explicitly typecast afterwards to ensure dtype integrity.

You can also specify the name of the column to use as the DataFrame index, and specify a subset of columns to be read.


And you can explicitly force columns to be parsed as dates with the parse_dates keyword.


If needed you can explicitly specify a format string, or a dict of arguments to pass to pandas.to_datetime().
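A sketch combining the three options just described (index column, column subset, and date parsing), again with SQLAlchemy and an in-memory SQLite database; the column names are illustrative.

```python
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("sqlite:///:memory:")
df = pd.DataFrame(
    {"id": [26, 42], "Date": ["2012-10-18", "2012-10-19"], "Col_1": ["X", "Y"]}
)
df.to_sql("data", engine, index=False)

# Use a column as the index and read only a subset of columns.
subset = pd.read_sql_table("data", engine, index_col="id", columns=["Col_1"])

# Force a column to be parsed as dates, here with an explicit format string.
dated = pd.read_sql_table("data", engine, parse_dates={"Date": "%Y-%m-%d"})
```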

You can check if a table exists using has_table().

Schema support

Reading from and writing to different schemas is supported through the schema keyword in the read_sql_table() and to_sql() functions. Note however that this depends on the database flavor (sqlite does not have schemas). For example, you can pass schema="other_schema" to write to, or read from, a table in a different schema.


Querying

You can query using raw SQL with the read_sql_query() function. In this case you must use the SQL variant appropriate for your database. When using SQLAlchemy, you can also pass SQLAlchemy Expression language constructs, which are database-agnostic.


Of course, you can specify a more “complex” query.

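Both a plain and a slightly more involved query can be sketched with the sqlite3 fallback; the table name and column values are illustrative.

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
pd.DataFrame({"id": [26, 42, 63], "Col_2": [25.7, -12.4, 5.73]}).to_sql(
    "data", conn, index=False
)

# Raw SQL, in the variant your database understands.
everything = pd.read_sql_query("SELECT * FROM data", conn)

# A more "complex" query works the same way.
positive = pd.read_sql_query("SELECT id FROM data WHERE Col_2 > 0", conn)
```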

The read_sql_query() function also supports a chunksize argument. Specifying this will return an iterator through chunks of the query result.

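A sketch of chunked reading: with chunksize set, read_sql_query returns an iterator of DataFrames rather than a single result. The table and chunk size are illustrative.

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
pd.DataFrame({"x": range(10)}).to_sql("data", conn, index=False)

# chunksize turns the result into an iterator of DataFrames.
chunks = pd.read_sql_query("SELECT * FROM data", conn, chunksize=4)
sizes = [len(chunk) for chunk in chunks]
```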

You can also run a plain query without creating a DataFrame, using execute(). This is useful for queries that don't return values, such as INSERT. It is functionally equivalent to calling execute on the SQLAlchemy engine or db connection object. Again, you must use the SQL syntax variant appropriate for your database.

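Note that pandas' own execute helper has been deprecated in recent pandas versions, so a version-independent sketch is to call execute directly on the DB-API connection, shown here with sqlite3.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (id INTEGER, Col_1 TEXT)")

# Statements that return no rows, such as INSERT, need no DataFrame at all.
conn.execute("INSERT INTO data VALUES (?, ?)", (26, "X"))
conn.commit()

rows = conn.execute("SELECT COUNT(*) FROM data").fetchone()[0]
```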

Engine connection examples

To connect with SQLAlchemy you use the create_engine() function to create an engine object from a database URI. You only need to create the engine once per database you are connecting to.

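Typical connection URIs look like the following; the server-backed ones are commented out because they require a running database and the matching driver, so only the SQLite engine is actually created. Exact driver names depend on the DBAPI you install.

```python
from sqlalchemy import create_engine

# Server-backed databases (illustrative URIs, not executed here):
# engine = create_engine("postgresql://user:password@localhost:5432/mydb")
# engine = create_engine("mysql+mysqldb://user:password@localhost/mydb")
# engine = create_engine("mssql+pyodbc://mydsn")

# SQLite: a file path, or an in-memory database.
engine = create_engine("sqlite:///:memory:")
```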

For more information see the examples in the SQLAlchemy documentation.

Advanced SQLAlchemy queries

You can use SQLAlchemy constructs to describe your query.

Use sqlalchemy.text() to specify query parameters in a backend-neutral way.

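A sketch of a backend-neutral bound parameter using sqlalchemy.text(); the table name "data" and the :bound parameter are illustrative.

```python
import pandas as pd
import sqlalchemy
from sqlalchemy import create_engine

engine = create_engine("sqlite:///:memory:")
pd.DataFrame({"Col_1": ["X", "Y"], "Col_2": [25.7, -12.4]}).to_sql(
    "data", engine, index=False
)

# text() lets you write bound parameters in a backend-neutral style.
query = sqlalchemy.text("SELECT * FROM data WHERE Col_2 > :bound")
result = pd.read_sql(query, engine, params={"bound": 0.0})
```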

If you have an SQLAlchemy description of your database, you can express where conditions using SQLAlchemy expressions.


You can combine SQLAlchemy expressions with parameters passed to read_sql() using sqlalchemy.bindparam().


Sqlite fallback

The use of sqlite is supported without using SQLAlchemy. This mode requires a Python database adapter which respects the Python DB-API.

You can create such a connection with, for example, the standard library's sqlite3 module.


And then issue queries against that connection directly.

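A minimal sketch of the fallback mode: the standard library's sqlite3 connection is a DB-API connection, so it can be passed straight to to_sql and read_sql_query.

```python
import sqlite3
import pandas as pd

# Any DB-API compatible connection works in the fallback mode.
con = sqlite3.connect(":memory:")
pd.DataFrame({"a": [1, 2, 3]}).to_sql("data", con, index=False)

df = pd.read_sql_query("SELECT * FROM data", con)
```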

Google BigQuery

Warning

Starting in 0.20.0, pandas has split off Google BigQuery support into the separate package pandas-gbq. You can pip install pandas-gbq to get it.

The pandas-gbq package provides functionality to read from and write to Google BigQuery.

pandas integrates with this external package: if pandas-gbq is installed, you can use the pandas methods pd.read_gbq and DataFrame.to_gbq, which will call the respective functions from pandas-gbq.

Full documentation can be found here.

Stata format

Writing to stata format

The method to_stata() will write a DataFrame to a .dta file. The format version of this file is always 115 (Stata 12).

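A round-trip sketch of writing a .dta file; the temporary path and the tiny frame are illustrative.

```python
import os
import tempfile
import pandas as pd

df = pd.DataFrame({"A": [1.0, 2.0], "B": ["x", "y"]})

# Write a .dta file (format version 115, Stata 12).
path = os.path.join(tempfile.mkdtemp(), "stata.dta")
df.to_stata(path)

# Read it back to confirm the round trip.
back = pd.read_stata(path)
```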

Stata data files have limited data type support; only strings with 244 or fewer characters, int8, int16, int32, float32 and float64 can be stored in .dta files. Additionally, Stata reserves certain values to represent missing data. Exporting a non-missing value that is outside of the permitted range in Stata for a particular data type will retype the variable to the next larger size. For example, int8 values are restricted to lie between -127 and 100 in Stata, and so variables with values above 100 will trigger a conversion to int16. nan values in floating point data types are stored as the basic missing data type (. in Stata).

Note

It is not possible to export missing data values for integer data types.

The Stata writer gracefully handles other data types including int64, bool, uint8, uint16 and uint32 by casting to the smallest supported type that can represent the data. For example, data with a type of uint8 will be cast to int8 if all values are less than 100 (the upper bound for non-missing int8 data in Stata), or, if the values are outside of this range, the variable is cast to int16.

Warning

Conversion from int64 to float64 may result in a loss of precision if int64 values are larger than 2**53.

Warning

StataWriter and to_stata() only support fixed-width strings containing up to 244 characters, a limitation imposed by the version 115 dta file format. Attempting to write Stata dta files with strings longer than 244 characters raises a ValueError.

Reading from Stata format

The top-level function read_stata() will read a dta file and return either a DataFrame or a StataReader that can be used to read the file incrementally.


Specifying a chunksize yields a StataReader instance that can be used to read chunksize lines from the file at a time. The StataReader object can be used as an iterator.

reader = pd.read_stata("stata.dta", chunksize=3)

for df in reader:
    print(df.shape)

For more fine-grained control, use iterator=True and specify chunksize with each call to read().

reader = pd.read_stata("stata.dta", iterator=True)

chunk1 = reader.read(5)
chunk2 = reader.read(5)

Currently the index is retrieved as a column.

The parameter convert_categoricals indicates whether value labels should be read and used to create a Categorical variable from them. Value labels can also be retrieved by the function value_labels(), which requires read() to be called before use.
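A hedged illustration of retrieving value labels; the file name and data are invented for the example, and a categorical column is written first so the resulting dta file actually contains value labels:

```python
import pandas as pd

# Writing a categorical column stores its categories as value labels
df = pd.DataFrame({"fruit": pd.Categorical(["apple", "pear", "apple"])})
df.to_stata("labeled.dta")

reader = pd.read_stata("labeled.dta", iterator=True)
reader.read()                  # value_labels() requires read() to be called first
labels = reader.value_labels()
print(labels)
```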

The parameter convert_missing indicates whether missing value representations in Stata should be preserved. If False (the default), missing values are represented as np.nan. If True, missing values are represented by StataMissingValue objects, and columns containing missing values will have object data type.
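A small sketch of convert_missing, assuming an illustrative file "missing.dta" written from pandas itself:

```python
import numpy as np
import pandas as pd

# One missing value; NaN is stored as a Stata missing value on export
df = pd.DataFrame({"x": [1.0, np.nan, 3.0]})
df.to_stata("missing.dta")

default = pd.read_stata("missing.dta")                       # missing -> np.nan
preserved = pd.read_stata("missing.dta", convert_missing=True)
print(default["x"].dtype, preserved["x"].dtype)
```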

Note

read_stata() and StataReader support .dta formats 113-115 (Stata 10-12), 117 (Stata 13), and 118 (Stata 14).

Note

Setting preserve_dtypes=False will upcast to the standard pandas data types: int64 for all integer types and float64 for floating-point data. By default, the Stata data types are preserved when importing.
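A sketch of preserve_dtypes, with invented file and column names:

```python
import numpy as np
import pandas as pd

# An int8 column is stored as a Stata byte
df = pd.DataFrame({"i": np.array([1, 2, 3], dtype="int8"), "f": [1.5, 2.5, 3.5]})
df.to_stata("dtypes.dta")

kept = pd.read_stata("dtypes.dta")                           # Stata dtypes preserved
upcast = pd.read_stata("dtypes.dta", preserve_dtypes=False)  # int64 / float64
print(kept["i"].dtype, upcast["i"].dtype)
```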

Categorical data

Categorical data can be exported to Stata data files as value-labeled data. The exported data consists of the underlying category codes as integer data values and the categories as value labels. Stata does not have an explicit equivalent to a Categorical, and information about whether the variable is ordered is lost when exporting.

Warning

Stata only supports string value labels, and so str is called on the categories when exporting data. Exporting Categorical variables with non-string categories produces a warning, and can result in a loss of information if the str representations of the categories are not unique.
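A hedged illustration of the warning above; the file name is invented, and the key point is that pandas stringifies the integer categories on export:

```python
import pandas as pd

# Non-string categories: str() is called on them when writing value labels,
# and pandas emits a warning about the type mismatch
df = pd.DataFrame({"c": pd.Categorical([1, 2, 1])})
df.to_stata("warn.dta")

back = pd.read_stata("warn.dta")
print(back["c"].tolist())   # the labels come back as strings
```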

Similarly, labeled data can be imported from Stata data files as Categorical variables using the keyword argument convert_categoricals (True by default). The keyword argument order_categoricals (True by default) determines whether imported Categorical variables are ordered.
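A minimal categorical round trip, with invented names:

```python
import pandas as pd

# Export a Categorical, then read it back as labeled data
df = pd.DataFrame({"grade": pd.Categorical(["low", "high", "low"])})
df.to_stata("cats.dta")

back = pd.read_stata("cats.dta")   # convert_categoricals=True by default
print(back["grade"].dtype)
```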

Note

When importing categorical data, the values of the variables in the Stata data file are not preserved, since Categorical variables always use integer data types between -1 and n-1, where n is the number of categories. If the original values in the Stata data file are required, these can be imported by setting convert_categoricals=False, which will import the original data (but not the variable labels). The original values can be matched to the imported categorical data, since there is a simple mapping between the original Stata data values and the category codes of imported Categorical variables: missing values are assigned code -1, the smallest original value is assigned 0, the second smallest is assigned 1, and so on, until the largest original value is assigned the code n-1.
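A sketch of this code mapping, using invented names; the categories "a", "b", "c" receive codes 0, 1, 2:

```python
import pandas as pd

# The exported file stores the underlying integer codes as data values
df = pd.DataFrame({"c": pd.Categorical(["a", "b", "a", "c"])})
df.to_stata("codes.dta")

raw = pd.read_stata("codes.dta", convert_categoricals=False)
print(raw["c"].tolist())
```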

Note

Stata supports partially labeled series. These series have value labels for some but not all data values. Importing a partially labeled series will produce a Categorical with string categories for the values that are labeled and numeric categories for values with no label.

SAS formats

The top-level function read_sas() can read (but not write) SAS XPORT (.xpt) and (since v0.18.0) SAS7BDAT (.sas7bdat) format files.

SAS files only contain two value types: ASCII text and floating-point values (usually 8 bytes but sometimes truncated). For xport files, there is no automatic type conversion to integers, dates, or categoricals. For SAS7BDAT files, the format codes may allow date variables to be automatically converted to dates. By default the whole file is read and returned as a DataFrame.

Specify a chunksize or use iterator=True to obtain reader objects (XportReader or SAS7BDATReader) for incrementally reading the file. The reader objects also have attributes that contain additional information about the file and its variables.

Read a SAS7BDAT file:

df = pd.read_sas("sas_data.sas7bdat")

Obtain an iterator and read an XPORT file 100,000 lines at a time:

def do_something(chunk):
    pass

rdr = pd.read_sas("sas_xport.xpt", chunksize=100000)
for chunk in rdr:
    do_something(chunk)

The specification for the xport file format is available from the SAS web site.

No official documentation is available for the SAS7BDAT format.

SPSS formats

New in version 0.25.0

The top-level function read_spss() can read (but not write) SPSS SAV (.sav) and ZSAV (.zsav) format files.

SPSS files contain column names. By default the whole file is read, categorical columns are converted into pd.Categorical, and a DataFrame with all columns is returned.

Specify the usecols parameter to obtain a subset of columns. Specify convert_categoricals=False to avoid converting categorical columns into pd.Categorical.

Read an SPSS file:

df = pd.read_spss("spss_data.sav")

Extract a subset of columns contained in usecols from an SPSS file and avoid converting categorical columns into pd.Categorical:

df = pd.read_spss(
    "spss_data.sav",
    usecols=["foo", "bar"],
    convert_categoricals=False,
)

More information about the SAV and ZSAV file formats is available here.

Other file formats

pandas itself only supports IO with a limited set of file formats that map cleanly to its tabular data model. For reading and writing other file formats into and from pandas, we recommend these packages from the broader community.

netCDF

xarray provides data structures inspired by the pandas DataFrame for working with multi-dimensional datasets, with a focus on the netCDF file format and easy conversion to and from pandas.

Performance considerations

This is an informal comparison of various IO methods, using pandas 0.24.2. Timings are machine dependent and small differences should be ignored.

sz = 1000000
df = pd.DataFrame({"A": np.random.randn(sz), "B": [1] * sz})

The following test functions will be used below to compare the performance of several IO methods:

import numpy as np
import os
import sqlite3

def test_sql_write(df):
    if os.path.exists("test.sql"):
        os.remove("test.sql")
    sql_db = sqlite3.connect("test.sql")
    df.to_sql(name="test_table", con=sql_db)
    sql_db.close()

def test_sql_read():
    sql_db = sqlite3.connect("test.sql")
    pd.read_sql_query("select * from test_table", sql_db)
    sql_db.close()

def test_hdf_fixed_write(df):
    df.to_hdf("test_fixed.hdf", "test", mode="w")

def test_hdf_fixed_read():
    pd.read_hdf("test_fixed.hdf", "test")

def test_hdf_fixed_write_compress(df):
    df.to_hdf("test_fixed_compress.hdf", "test", mode="w", complib="blosc")

def test_hdf_fixed_read_compress():
    pd.read_hdf("test_fixed_compress.hdf", "test")

def test_csv_write(df):
    df.to_csv("test.csv", mode="w")

def test_csv_read():
    pd.read_csv("test.csv", index_col=0)

def test_feather_write(df):
    df.to_feather("test.feather")

def test_feather_read():
    pd.read_feather("test.feather")

def test_pickle_write(df):
    df.to_pickle("test.pkl")

def test_pickle_read():
    pd.read_pickle("test.pkl")

When writing, the top three functions in terms of speed are test_feather_write, test_hdf_fixed_write and test_hdf_fixed_write_compress.

When reading, the top three functions in terms of speed are test_feather_read, test_pickle_read and test_hdf_fixed_read.

How do I convert multiple text files to multiple CSVs in Python?

Steps to convert a text file to CSV using Python:
Step 1: Install the Pandas package, if you have not already done so.
Step 2: Capture the path where your text files are stored.
Step 3: Specify the path where the new CSV files will be saved.
Step 4: Convert the text files to CSV using Python.
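Putting the steps together, a minimal sketch; a temporary folder with one sample tab-delimited file is created on the fly so the snippet runs as-is, and the column names are illustrative:

```python
import glob
import os
import tempfile

import pandas as pd

# Create a sample tab-delimited text file in a temporary folder;
# point `folder` at your own data instead.
folder = tempfile.mkdtemp()
with open(os.path.join(folder, "interviews.txt"), "w") as f:
    f.write("January\t3\t5\nFebruary\t2\t4\n")

# Find every .txt file and write a .csv next to it
for txt_path in glob.glob(os.path.join(folder, "*.txt")):
    df = pd.read_csv(txt_path, delimiter="\t", names=["month", "first", "second"])
    df.to_csv(os.path.splitext(txt_path)[0] + ".csv", index=False)

print(sorted(os.listdir(folder)))
```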

Can you convert a text file to CSV using Python?

You can convert a text file to a CSV file in Python in four simple steps: (1) install the Pandas library, (2) import the Pandas library, (3) read the text file as a DataFrame, and (4) write the DataFrame to a CSV file. (Optionally, in the shell: pip install pandas)