@samgrover
Created April 16, 2019 17:51
Parse the Swarm/Foursquare exported data and create entries in Day One using their command line tool.
#!/usr/bin/env python
# Parse the Swarm/Foursquare exported data and create entries in Day One using their command line tool.
# Day One command line tool available at: http://dayoneapp.com/support/CLI

import sys
import json
import requests
import subprocess
import time
from datetime import timezone, timedelta, datetime

# Foursquare API Keys. Docs at https://developer.foursquare.com/docs
# Needed to get venue coordinates, but only if you set PROCESS_VENUE_COORDS to True.
PROCESS_VENUE_COORDS = False
FOURSQUARE_CLIENT_ID = "YOUR_FOURSQUARE_CLIENT_ID"
FOURSQUARE_CLIENT_SECRET = "YOUR_FOURSQUARE_CLIENT_SECRET"

# Indicates which entries to process for this run.
# Only needed if you're processing venue coords on a limited API plan, or have some other constraint.
START = 0
END = 100

# The name of the journal you want to add the entries into.
JOURNAL_NAME = "YOUR_DAY_ONE_JOURNAL_NAME"

# Entry tags
TAGS = ["Foursquare", "Swarm"]

# A placeholder for entries that have no location/venue name.
NO_NAME_PLACEHOLDER = r"¯\_(ツ)_/¯"

ERROR_MARKER = "<<<<>>>>"

# Set to True to execute the commands to create entries. Otherwise they are just printed out.
EXECUTE_COMMAND = False

def get_JSON(filename):
    with open(filename) as json_data:
        d = json.load(json_data)
        return d

def related_item_url_from_checkin_id(checkin_id):
    return f"https://www.swarmapp.com/checkin/{checkin_id}"

def photos_with_checkin_id(incoming_checkin_id):
    matching_photos = []
    for a_photo_item in photo_items:
        related_item_url = a_photo_item["relatedItemUrl"]
        checkin_id = related_item_url.split('/')[-1]
        if checkin_id == incoming_checkin_id:
            a_processed_photo = {
                "checkinId": checkin_id,
                "fullUrl": a_photo_item["fullUrl"],
                "width": a_photo_item["width"],
                "height": a_photo_item["height"]
            }
            matching_photos.append(a_processed_photo)
    return matching_photos

def get_lat_lng(location):
    return (location["lat"], location["lng"])

def get_venue_location(venue_id):
    # Uncomment the following line if you want to rate limit the call for venue info
    # time.sleep(2)
    coords = (0, 0)
    if PROCESS_VENUE_COORDS is False:
        return coords

    url = f'https://api.foursquare.com/v2/venues/{venue_id}'
    params = dict(
        v='20190323',
        client_id=FOURSQUARE_CLIENT_ID,
        client_secret=FOURSQUARE_CLIENT_SECRET,
    )
    resp = requests.get(url=url, params=params)
    if resp.status_code == 200:
        data = json.loads(resp.text)
        # print(data)
        meta = data["meta"]
        if meta["code"] == 200:
            coords = get_lat_lng(data["response"]["venue"]["location"])
        else:
            print(ERROR_MARKER)
            print(f"Error retrieving details for venue: {venue_id}")
            print(json.dumps(meta, indent=4))
            print(ERROR_MARKER)
    return coords

def text_for_name(name):
    text = f"I'm at {name}"
    return text

def add_shout(item, text):
    shout = ""
    if "shout" in item:
        shout = item["shout"]
        text += f"\n{shout}"
        return text
    return text

if len(sys.argv) != 3:
    print("This script requires the following arguments:")
    print("swarm-day-one-import.py <checkins.json> <photos.json>")
    exit(0)

checkins = get_JSON(sys.argv[1])
checkin_items = checkins['items']
TOTAL = checkins['count']

photos = get_JSON(sys.argv[2])
photo_items = photos['items']

# Pre-process all the photos and add them to the checkins.
for a_checkin_item in checkin_items:
    if a_checkin_item["type"] == "checkin":
        checkin_id = a_checkin_item["id"]
        matching_photos = photos_with_checkin_id(checkin_id)
        if len(matching_photos) > 0:
            a_checkin_item["photos"] = matching_photos

for item in checkin_items[START:END]:
    text = ''
    path_to_photos = []
    lat = 0
    lng = 0
    if item["type"] == "checkin":
        if "venue" in item:
            venue = item["venue"]
            name = venue["name"]
            text = text_for_name(name)
            text = add_shout(item, text)

            photos = []
            if "photos" in item:
                photos = item["photos"]
            for a_photo in photos:
                photo_url = a_photo["fullUrl"]
                filename = photo_url.split('/')[-1]
                path_to_filename = f"images/{filename}"
                path_to_photos.append(path_to_filename)
                print(path_to_filename)

            (lat, lng) = get_venue_location(venue["id"])
        else:
            # These don't have venue or location.
            text = text_for_name(NO_NAME_PLACEHOLDER)
            text = add_shout(item, text)
    elif item["type"] == "venueless":
        # Process the ones that have location instead of venue. type = venueless.
        # None of these have photos in my data set, so we just skip photo processing here.
        location = item["location"]
        name = location["name"]
        text = text_for_name(name)
        text = add_shout(item, text)
        (lat, lng) = get_lat_lng(location)
    elif item["type"] == "shout":
        location = item["location"]
        text = text_for_name(NO_NAME_PLACEHOLDER)
        text = add_shout(item, text)

    timestamp = datetime.fromtimestamp(item["createdAt"], timezone.utc)
    timestr = '--isoDate=' + timestamp.strftime("%Y-%m-%dT%H:%M:%SZ")
    dayone_command = ['dayone2', 'new', text, '--journal', JOURNAL_NAME, timestr]

    tz = timezone(timedelta(minutes=item["timeZoneOffset"]))
    dayone_command.extend(['-z', str(tz)])

    if len(TAGS) > 0:
        dayone_command.extend(['-t'] + TAGS)
    if len(path_to_photos) > 0:
        dayone_command.extend(['--photos'] + path_to_photos)
    if lat != 0 and lng != 0:
        dayone_command.extend(['--coordinate', str(lat), str(lng)])

    print(" ".join(dayone_command))
    if EXECUTE_COMMAND is True:
        subprocess.call(dayone_command)

print(f"Start = {START}")
print(f"End = {END}")
print(f"Total = {TOTAL}")
print(f"Remaining = {TOTAL - END}")
@jamrolu

jamrolu commented Mar 18, 2020

Did you ever get this working? Hitting the same error - complete novice here just looking for a complete journal and way out of my depth 😆

@otaviocc

@ghost, @jamrolu, you have to use a more recent Python version. I use Python 3.9 here. If you have brew installed, you can use it to install Python. Something like

brew install python@3.9

You also need requests, so after installing python@3.9 you have to run:

pip3.9 install requests

If you want to import geolocation, then you'll have to update https://api.foursquare.com/v2/venues/{venue_id} from v2 to v3 and pass an Authorization header.

This patch should do the trick:

--- swarm-day-one-import.py	2022-01-20 10:38:27.000000000 +0100
+++ swarm-day-one-import.new.py	2022-01-20 10:37:56.000000000 +0100
@@ -13,8 +13,7 @@
 # Foursquare API Keys. Docs at https://developer.foursquare.com/docs
 # Needed to get venue coordinates, but only if you set PROCESS_VENUE_COORDS to True.
 PROCESS_VENUE_COORDS = False
-FOURSQUARE_CLIENT_ID   = "YOUR_FOURSQUARE_CLIENT_ID"
-FOURSQUARE_CLIENT_SECRET = "YOUR_FOURSQUARE_CLIENT_SECRET"
+FOURSQUARE_API_TOKEN = ""
 
 # Indicates which entries to process for this run.
 # Only needed if you're processing venue coords on a limited API plan, or have some other constraint.
@@ -59,7 +58,7 @@
     return matching_photos
 
 def get_lat_lng(location):
-    return (location["lat"], location["lng"])
+    return (location["latitude"], location["longitude"])
 
 def get_venue_location(venue_id):
     # Uncomment the following line if you want to rate limit the call for venue info
@@ -69,21 +68,18 @@
     if PROCESS_VENUE_COORDS is False:
         return coords
     
-    url = f'https://api.foursquare.com/v2/venues/{venue_id}'
-
-    params = dict(
-        v='20190323',
-        client_id=FOURSQUARE_CLIENT_ID,
-        client_secret=FOURSQUARE_CLIENT_SECRET,
-    )
-
-    resp = requests.get(url=url, params=params)
+    url = f'https://api.foursquare.com/v3/places/{venue_id}'
+    headers = {
+        "Accept": "application/json",
+        "Authorization": FOURSQUARE_API_TOKEN
+    }
+    resp = requests.request("GET", url, headers=headers)
     if resp.status_code == 200:
         data = json.loads(resp.text)
         # print(data)
-        meta = data["meta"]
-        if meta["code"] == 200:
-            coords = get_lat_lng(data["response"]["venue"]["location"])
+        geocodes = data["geocodes"]
+        if "main" in geocodes:
+            coords = get_lat_lng(geocodes["main"])
         else:
             print(ERROR_MARKER)
             print(f"Error retrieving details for venue: {venue_id}")
@@ -103,6 +99,14 @@
         return text
     return text
 
+def add_url(item, text):
+    url = ""
+    if "url" in item:
+        url = item["url"]
+        text += f"\n[Open on Foursquare]({url})"
+        return text
+    return text
+
 if len(sys.argv) != 3:
     print("This script requires the following arguments:")
     print("swarm-day-one-import.py <checkins.json> <photos.json>")
@@ -134,8 +138,10 @@
         if "venue" in item:
             venue = item["venue"]
             name = venue["name"]
+            venue_url = venue["url"]
             text = text_for_name(name)
             text = add_shout(item, text)
+            text = add_url(venue, text)
 
             photos = []
             if "photos" in item:

cc: @samgrover

@johncameron3

@samgrover, thank you very much for the script, and @otaviocc thank you for updating it to the v3 API. I'm really excited to potentially be able to merge in over ten years of checkins from Swarm to Day One!

I'm able to import checkins that don't have a picture. The script seems to be failing on checkins with a picture, as it does not download them to a ./images folder, or anywhere I can find. What is the expected workflow here? Do I need to define an images folder location somewhere? Is it relative to wherever the script is run from?

The script will ultimately fail when invoking this summarized portion of the command: dayone2 new --photos images/long_photo_name.jpg

error: File at path (images/long_photo_name.jpg) does not exist.

--

I tried running as sudo and specifying ./images in the code here: path_to_filename = f"images/{filename}". No luck.

If I manually copy a downloaded image into ./images from say https://fastly.4sqi.net/img/general/590x786/long_photo_name.jpg it works!

Any suggestions would be greatly appreciated!

@samgrover
Author

@johncameron3 Yea, I think I had a separate script at the time to download images. Sorry that isn't included in here. I stopped using Swarm a while back so I never updated/improved on this.
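For anyone else stuck at this step, a minimal sketch of such a companion download script might look like the following. It assumes photos.json has the same "items"/"fullUrl" shape the importer already reads, and derives local filenames the same way (last URL path component), so the importer's relative images/ paths line up. Only the standard library is used; the script name download-photos.py is made up.

```python
#!/usr/bin/env python
# Hypothetical companion script: download every photo referenced in the
# Swarm export's photos.json into ./images, where the importer expects them.
import os
import sys
import json
import urllib.request

def local_path(photo_url, folder="images"):
    # Mirror the importer: the local filename is the last URL component.
    return os.path.join(folder, photo_url.split('/')[-1])

def download_photos(photos_json, folder="images"):
    os.makedirs(folder, exist_ok=True)
    with open(photos_json) as f:
        items = json.load(f)["items"]
    for item in items:
        url = item["fullUrl"]
        dest = local_path(url, folder)
        if os.path.exists(dest):
            continue  # skip photos we already fetched
        urllib.request.urlretrieve(url, dest)

if __name__ == "__main__" and len(sys.argv) == 2:
    download_photos(sys.argv[1])
```

Run it as python download-photos.py photos.json from the same directory you run the importer from, so the relative images/ paths resolve.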

@samgrover
Author

And thanks to @otaviocc for updating it so it remains useful.

@otaviocc

@samgrover my pleasure!
