
@TheLinuxGuy
Last active April 5, 2025 22:10
Export your TeslaFi user data to CSV format that TeslaMate can easily import (fixes bugs in the official script)
# Author: Giovanni Mazzeo (github.com/thelinuxguy)
# Script fetches your TeslaFi.com user data to allow importing into TeslaMate.
# Updated 04/05/2025 to normalize additional fields, based on comments on this gist.
# This script fixes a couple of bugs and issues seen by people running the older script:
# 1) "Invalid CSV delimiter" issue: https://github.com/teslamate-org/teslamate/issues/4569
# 2) "battery_level" column integer data change in 2024: https://github.com/teslamate-org/teslamate/issues/4477
# You can thank me by buying me a coffee :)
# https://buymeacoffee.com/thelinuxguy
import requests
import csv
from io import StringIO
from lxml.html import fromstring
username = 'username'
password = 'password'
years = [2020, 2021, 2022, 2023, 2024, 2025] # array of years you want to export
months = [1,2,3,4,5,6,7,8,9,10,11,12] # I assume all the months, up to you
cookie = ''
# Set the proper delimiter that validate_csv.py expects
CSV_DELIMITER = ',' # Change this if your validator expects a different delimiter

def login():
    # Fetch the login page to obtain the session cookies and the CSRF token
    url = "https://teslafi.com/userlogin.php"
    response = requests.request("GET", url, headers={}, data={})
    cookies = ""
    for key in response.cookies.keys():
        this_cookie = key + "=" + response.cookies.get(key)
        if cookies == "":
            cookies = this_cookie
        else:
            cookies += "; " + this_cookie
    token = fromstring(response.text).forms[0].fields['token']
    global cookie
    cookie = cookies
    # POST the credentials to establish an authenticated session
    payload = {'username': username, 'password': password, 'remember': '1', 'submit': 'Login', 'token': token}
    headers = {"Cookie": cookies}
    requests.request("POST", url, headers=headers, data=payload)
    return True


def getdata(m, y):
    # Request the CSV export for month m of year y
    url = "https://teslafi.com/exportMonth.php"
    headers = {'Content-Type': 'application/x-www-form-urlencoded', 'Cookie': cookie}
    response = requests.request("POST", url, headers=headers, data=pl(m, y))
    return response


def detect_delimiter(text):
    """Detects the most likely delimiter in the CSV data"""
    if not text or '\n' not in text:
        return ','
    # Sample the first line to detect the delimiter
    first_line = text.split('\n', 1)[0]
    delimiters = [(',', first_line.count(',')),
                  (';', first_line.count(';')),
                  ('\t', first_line.count('\t'))]
    # Sort by frequency, highest first
    delimiters.sort(key=lambda x: x[1], reverse=True)
    # Return the most common delimiter, or comma if none found
    return delimiters[0][0] if delimiters[0][1] > 0 else ','


def normalize_field(rows, header, field_name):
    """
    Normalize field values to integers without decimal points:
    - Always convert to integer representation
    - If decimal part < 0.50, round down
    - If decimal part >= 0.50, round up

    Args:
        rows: List of CSV rows (lists)
        header: List of column names
        field_name: Name of the field to normalize

    Returns:
        Tuple of (modified_rows, normalization_count)
    """
    # Find the index of the specified field column
    try:
        field_index = header.index(field_name)
    except ValueError:
        # If the field column doesn't exist, return the original rows
        return rows, 0
    normalization_count = 0
    # Iterate through all rows
    for row in rows:
        # Skip if the row is too short or the field is empty
        if len(row) <= field_index or not row[field_index].strip():
            continue
        try:
            # Try to convert the field to a float
            value = float(row[field_index])
            # Get the integer value (rounded up or down based on the decimal part)
            if value - int(value) < 0.5:
                new_value = int(value)      # Round down
            else:
                new_value = int(value) + 1  # Round up
            # Convert to the string representation of the integer
            new_value_str = str(new_value)
            # Only count as a normalization if we actually changed the value
            if row[field_index] != new_value_str:
                row[field_index] = new_value_str
                normalization_count += 1
        except (ValueError, TypeError):
            # Skip if conversion fails
            continue
    return rows, normalization_count


def normalize_battery_level(rows, header):
    """
    Normalize battery_level values to integers without decimal points.
    This is a wrapper around normalize_field for backward compatibility.

    Args:
        rows: List of CSV rows (lists)
        header: List of column names

    Returns:
        Tuple of (modified_rows, normalization_count)
    """
    return normalize_field(rows, header, 'battery_level')


def savefile(response, m, y):
    try:
        # Detect what delimiter the API is using
        input_delimiter = detect_delimiter(response.text)
        # Read the CSV data with the detected delimiter
        csv_data = StringIO(response.text)
        reader = csv.reader(csv_data, delimiter=input_delimiter)
        rows = list(reader)
        # Extract the header and data rows
        if not rows:
            print(f"Skipped creating {fname(m,y)} for year {y} and month number {m} due to lack of data from TeslaFi.")
            return
        header = rows[0]
        data_rows = rows[1:]
        # Check if there are any data rows
        if not data_rows:
            print(f"Skipped creating {fname(m,y)} for year {y} and month number {m} due to lack of data from TeslaFi.")
            return
        # Normalize fields
        fields_to_normalize = ['battery_level', 'charger_actual_current', 'charger_voltage']
        for field in fields_to_normalize:
            data_rows, normalization_count = normalize_field(data_rows, header, field)
            if normalization_count > 0:
                print(f"Detected `{field}` column malformed, {normalization_count} rows of data have been autocorrected")
        # Write the standardized CSV with the correct delimiter
        with open(fname(m,y), "w", newline='', encoding='utf-8') as file:
            writer = csv.writer(file, delimiter=CSV_DELIMITER, quoting=csv.QUOTE_MINIMAL)
            writer.writerow(header)
            writer.writerows(data_rows)
        print(f"Saved: {fname(m,y)}")
    except Exception as e:
        print(f"Error processing CSV: {str(e)}")
        return


def fname(m, y):
    return "TeslaFi" + str(m) + str(y) + ".csv"


def pl(m, y):
    # Fetch the export page to grab the CSRF magic value, then build the POST body
    url = 'https://teslafi.com/export2.php'
    response = requests.request("GET", url, headers={"Cookie": cookie})
    magic = fromstring(response.text).forms[0].fields['__csrf_magic']
    return '__csrf_magic=' + magic + '&Month=' + str(m) + '&Year=' + str(y)


def go():
    # Log in once, then export every requested month/year combination
    login()
    for year in years:
        for month in months:
            print(f"Processing: {month}/{year}")
            d = getdata(month, year)
            savefile(d, month, year)


go()
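
# Usage (a minimal sketch; the filename teslafi_export.py is just an assumption,
# save the script under whatever name you like):
#   1. Install the two third-party modules it imports: requests and lxml.
#   2. Edit the username, password, years and months variables at the top.
#   3. Run: python teslafi_export.py
# One CSV per month is written to the current directory, named TeslaFi<month><year>.csv
# (for example TeslaFi42025.csv for April 2025), ready to be imported into TeslaMate.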
@jeepyboi commented Mar 8, 2025

I'm getting an error with this code.

... Line 11, in
import requests
ModuleNotFoundError: No module named 'requests'

Sorry if this is a noob mistake. I'm not a programmer.

@TheLinuxGuy (Author) commented:

This Python script needs the requests module (and possibly others) to be installed on your system.

You should be able to do pip install requests and then try again. https://www.geeksforgeeks.org/how-to-install-requests-in-python-for-windows-linux-mac/
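
Concretely, the commands are roughly (a minimal sketch, assuming pip is on your PATH; lxml is the other third-party module this script imports):

pip install requests lxml

Then run the script again.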

@jeepyboi commented Mar 8, 2025

Thank you for the reply. I got it to work. This was the first time I have used Python, and I had to install requests and lxml.

Again, thanks for helping with such a basic request.

@JGLord commented Mar 30, 2025

@TheLinuxGuy: This script really has a lot of potential. In fact, I used it to load a large amount of data, but I detected two small problems: two additional fields need to be normalized, charger_actual_current and charger_voltage. These fields are smallint columns in TeslaMate, and TeslaFi exports them with many decimal places.

Raw data example:

data_id   Date                 battery_range   battery_current   charger_actual_current   charger_voltage
456798    2025-03-07 08:01:10  294.62          -7.0000001        15.0000002               238.524
456799    2025-03-07 08:02:10  294.62          0.20000000        34.0000005               236.379
456800    2025-03-07 08:03:11  294.62          0.20000000        35.0000005               235.29
456801    2025-03-07 08:04:10  294.62          0.30000000        35.0000005               238.128
456802    2025-03-07 08:05:11  294.62          -0.3000000        35.0000005               238.491
456803    2025-03-07 08:06:11  294.62          0.20000000        35.0000005               238.722
456804    2025-03-07 08:07:11  294.62          -0.4000000        35.0000005               238.689

TeslaMate logs: [screenshot attached in the original comment]

Could you please add these normalizations?

Thanks
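
For reference, a minimal sketch of what that normalization would look like with the gist's own normalize_field helper (assumed to be in scope), using two rows trimmed from the sample above:

header = ['data_id', 'Date', 'charger_actual_current', 'charger_voltage']
rows = [['456798', '2025-03-07 08:01:10', '15.0000002', '238.524'],
        ['456800', '2025-03-07 08:03:11', '35.0000005', '235.29']]
for field in ('charger_actual_current', 'charger_voltage'):
    rows, count = normalize_field(rows, header, field)
# rows[0] is now ['456798', '2025-03-07 08:01:10', '15', '239']
# rows[1] is now ['456800', '2025-03-07 08:03:11', '35', '235']
# (238.524 rounds up because its decimal part is >= 0.5; 235.29 rounds down)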

@JGLord commented Apr 5, 2025

By the way, I also opened a TeslaFi ticket telling them that their raw data export no longer includes Charger Power. The problem has been confirmed and should be fixed this month. It does not cause import errors; the data will simply be missing in TeslaMate after import.
https://teslafi.zendesk.com/hc/en-us/requests/104

@TheLinuxGuy (Author) commented:

@JGLord I already deleted my TeslaFi account and cannot test, so you will need to test this version yourself and report back if it works. I don't feel comfortable updating this gist until it is verified as working, and I have no way to test it myself.

Try this: https://gist.github.com/TheLinuxGuy/46ff652ba66201da87d268f605e9ad1e

@JGLord commented Apr 5, 2025 via email
