My program needs to read CSV files which may have 1, 2 or 3 columns, and it needs to modify its behaviour accordingly. Is there a simple way to check the number of columns without "consuming" a row before the iterator runs? The following code is the most elegant I could manage, but I would prefer to run the check before the for loop starts:
import csv

f = 'testfile.csv'
d = '\t'
with open(f, newline='') as source:
    reader = csv.reader(source, delimiter=d)
    for row in reader:
        if reader.line_num == 1:
            fields = len(row)
        if len(row) != fields:
            raise CSVError("Number of fields should be %s: %s" % (fields, str(row)))
        if fields == 1:
            pass
        elif fields == 2:
            pass
        elif fields == 3:
            pass
        else:
            raise CSVError("Too many columns in input file.")
Edit: I should have included more information about my data. If there is only one field, it must contain a name in scientific notation. If there are two fields, the first must contain a name, and the second a linking code. If there are three fields, the additional field contains a flag which specifies whether the name is currently valid. Therefore if any row has 1, 2 or 3 columns, all must have the same.
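One way to do the check before the loop starts is to pull the first row off the reader with next() and then chain it back on with itertools.chain, so the loop still sees every row. A minimal sketch, assuming the same tab delimiter; CSVError is a stand-in for the question's custom exception, and the read_rows name is illustrative:

```python
import csv
import itertools

class CSVError(Exception):
    pass

def read_rows(path, delimiter='\t'):
    """Yield rows, enforcing a column count fixed by the first row."""
    with open(path, newline='') as f:
        reader = csv.reader(f, delimiter=delimiter)
        first = next(reader)              # peek at the first row
        fields = len(first)
        if fields not in (1, 2, 3):
            raise CSVError("Too many columns in input file.")
        # chain the peeked row back so callers still see every row
        for row in itertools.chain([first], reader):
            if len(row) != fields:
                raise CSVError("Number of fields should be %s: %s" % (fields, row))
            yield row
```

Because the column count is known before the loop body runs, the 1-, 2- or 3-column dispatch can be chosen once, outside the loop.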
So you are working on a number of different data analytics projects, and as part of some of them, you are bringing data in from a CSV file.
One area you may want to look at is How to Compare Column Headers in CSV to a List in Python, but that could be coupled with the outputs of this post.
If you are manipulating this data as part of the process, you need to ensure that all of it was loaded without failure.
With this in mind, we will look to help you with a possible automation task to ensure that:
(A) All rows and columns are totalled on loading of a CSV file.
(B) As part of the process, if the same dataset is exported, the total on the export can be counted.
(C) This ensures that all the required table rows and columns are always available.
Python Code that will help you with this
So in the below code, there are a number of things to look at.
Let's look at the CSV file we will read in:
In total there are ten rows with data. The top row is not included in the count as it is deemed a header row. There are also seven columns.
This first bit just reads in the data, and it automatically skips the header row.
import pandas as pd
df = pd.read_csv("csv_import.csv") #===> reads in all the rows; the first is treated as the header, not data.
Output when the counting code below is run:
Number of Rows: 10
Number of Columns: 7
Next it creates two variables that count the number of rows and columns and prints them out.
Note the use of df.axes: axes[0] is the row index and axes[1] is the column index, so len() gives the counts without touching individual cells.
total_rows = len(df.axes[0]) #===> axes[0] is the row axis
total_cols = len(df.axes[1]) #===> axes[1] is the column axis
print("Number of Rows: " + str(total_rows))
print("Number of Columns: " + str(total_cols))
And bringing it all together
import pandas as pd
df = pd.read_csv("csv_import.csv") #===> reads in all the rows; the first is treated as the header, not data.
total_rows = len(df.axes[0]) #===> axes[0] is the row axis
total_cols = len(df.axes[1]) #===> axes[1] is the column axis
print("Number of Rows: " + str(total_rows))
print("Number of Columns: " + str(total_cols))
Output:
Number of Rows: 10
Number of Columns: 7
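The same counts can be read more directly from df.shape, which returns a (rows, columns) tuple. A small sketch using an inline DataFrame, since csv_import.csv is not reproduced here:

```python
import pandas as pd

# stand-in for pd.read_csv("csv_import.csv"): three data rows, two columns
df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})

total_rows, total_cols = df.shape  # shape is a (rows, columns) tuple
print("Number of Rows: " + str(total_rows))
print("Number of Columns: " + str(total_cols))
```

This avoids two separate len(df.axes[...]) calls and reads the counts from one attribute.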
In summary, this would be very useful if you are trying to reduce the amount of manual effort in checking the population of a file.
As a result it would help with:
(A) Ensuring that scripts that process data do not remove rows or columns unnecessarily.
(B) Batch runs that know the size of a dataset in advance can make sure they have the data they need before processing.
(C) Control logs – databases can store these counts to show that what was processed is correct.
(D) Where an automated run has to be paused, this can help with identifying the problem and then fixing it quickly.
(E) Finally, if you are receiving agreed data from a third party, it can be used to alert them when too much or too little information was received.
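The export check described above can be sketched as a round trip: export the DataFrame, re-read the file, and compare the totals. The DataFrame and file path here are illustrative, not from the post:

```python
import os
import tempfile

import pandas as pd

df = pd.DataFrame({"id": [1, 2], "name": ["a", "b"]})
rows_before, cols_before = df.shape

# export, then re-read the same file and compare the totals
path = os.path.join(tempfile.mkdtemp(), "export_check.csv")
df.to_csv(path, index=False)
reloaded = pd.read_csv(path)

if reloaded.shape != (rows_before, cols_before):
    raise ValueError("export size mismatch")
```

The same comparison could be logged or stored in a control table rather than raised as an error.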
Here is another post you should read!
How to change the headers on a CSV file
CSV (Comma Separated Values) is a simple file format used to store tabular data, such as a spreadsheet or database. A CSV file stores tabular data (numbers and text) in plain text. Each line of the file is a data record. Each record consists of one or more fields, separated by commas. The use of the comma as a field separator is the source of the name for this file format.
In this article, we are going to discuss various approaches to count the number of lines in a CSV file using Python.
We are going to use the below dataset to perform all operations:
Python3

import pandas as pd

results = pd.read_csv('Data.csv')
print(results)
Output:
To count the number of lines/rows present in a CSV file, we have two different methods:
- Using the len() function.
- Using a counter.
Using the len() function
Under this method, we read the CSV file using the pandas library and then call len() on the resulting DataFrame, which returns an int value of the number of lines/rows present in the CSV file (the header row is not counted).
Python3

import pandas as pd

results = pd.read_csv('Data.csv')
print("Number of lines present:-", len(results))
Output:
Using a counter
Under this approach, we initialize an integer rowcount to -1 (not 0, because the iteration starts from the header row rather than the first data row), iterate through the whole file, incrementing rowcount by one for each line, and finally print the rowcount value.
Python3

rowcount = -1  # start at -1 so the header row is not counted
for row in open("Data.csv"):
    rowcount += 1
print("Number of lines present:-", rowcount)
Output:
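One caveat with counting raw lines: a quoted CSV field may legally contain a newline, which open() counts as an extra line. Iterating with csv.reader counts records instead of physical lines. A sketch on an in-memory sample (the data string is invented for illustration):

```python
import csv
import io

# one header plus two records; the first record spans two physical lines
data = 'name,notes\n"a","line one\nline two"\n"b","ok"\n'

raw_lines = sum(1 for _ in io.StringIO(data))                 # over-counts: 4
records = sum(1 for _ in csv.reader(io.StringIO(data))) - 1   # header excluded: 2
print("Number of lines present:-", records)
```

If your data never contains embedded newlines, the plain line counter above is fine; otherwise csv.reader gives the count that matches len() on the pandas DataFrame.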