""" | |
MIT License | |
Copyright (c) 2017 Michał Bultrowicz | |
Permission is hereby granted, free of charge, to any person obtaining a copy | |
of this software and associated documentation files (the "Software"), to deal | |
in the Software without restriction, including without limitation the rights | |
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell | |
copies of the Software, and to permit persons to whom the Software is |
#!/bin/bash
###########
# Checks whether the given casks need to be upgraded.
#
set -o pipefail
source /etc/profile

# With no arguments, check every installed cask; otherwise check only
# the casks named on the command line.
if [ $# -eq 0 ]; then
    ARGS=($(brew list --cask))  # formerly `brew cask list`, removed in newer Homebrew
else
    ARGS=("$@")
fi
from keras.layers import (
    Conv2D, BatchNormalization, Input, GlobalAveragePooling2D, Dense, LeakyReLU
)
from keras.models import Model

# Function for building the discriminator layers.
def build_discriminator(start_filters, spatial_dim, filter_size):

    # Function for building a CNN block that downsamples the image.
    def add_discriminator_block(x, filters, filter_size):
        x = Conv2D(filters, filter_size, padding='same')(x)
        x = BatchNormalization()(x)
        x = Conv2D(filters, filter_size, padding='same', strides=2)(x)
        x = BatchNormalization()(x)
        x = LeakyReLU(0.3)(x)
        return x
"""A set of helpers for reversing urls, similar to Django ``reverse``. | |
Usage: | |
.. code:: python | |
@router.get("/class/{class_id}") | |
async def get_class(request: Request, class_id: int = Path(...)): | |
student_route = get_route(request.app, "list_students") | |
class_students_url = URLFactory(student_route).get_path(class_id=class_id) |
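As a rough illustration of what such path reversal involves (a minimal sketch; the ``path_format`` attribute and the regex-based substitution here are assumptions, not this library's actual implementation):

```python
import re
from types import SimpleNamespace

def get_path(route, **params):
    """Hypothetical sketch: fill the {name} placeholders of a route's
    path template with the given keyword parameters."""
    def substitute(match):
        name = match.group(1)
        if name not in params:
            raise KeyError(f"missing path parameter: {name}")
        return str(params[name])
    return re.sub(r"\{(\w+)\}", substitute, route.path_format)

# Stand-in for a router's route object.
route = SimpleNamespace(path_format="/class/{class_id}/students")
print(get_path(route, class_id=7))  # → /class/7/students
```

Raising on a missing parameter (rather than leaving the placeholder in place) surfaces broken links at reversal time instead of at request time.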
- Web Wormhole https://webwormhole.io/ https://github.com/saljam/webwormhole
- ToffeeShare https://toffeeshare.com/
- FilePizza https://file.pizza/
- ShareDrop sharedrop.io https://github.com/szimek/sharedrop (SOLD, not recommended, use one of the forks)
- SnapDrop snapdrop.net https://github.com/RobinLinus/snapdrop (SOLD, not recommended, use one of the forks), a clone of ShareDrop
- PairDrop https://pairdrop.net/ https://github.com/schlagmichdoch/pairdrop, a fork of SnapDrop
- Instant.io https://instant.io/
- FileTC https://file.tc/
- Open a Linux environment
To start with, I am using Multipass.
https://docs.yoctoproject.org/brief-yoctoprojectqs/index.html#build-host-packages
$> sudo apt install gawk wget git diffstat unzip texinfo gcc build-essential chrpath socat cpio python3 python3-pip python3-pexpect xz-utils debianutils iputils-ping python3-git python3-jinja2 python3-sphinx libegl1-mesa libsdl1.2-dev python3-subunit mesa-common-dev zstd liblz4-tool file locales libacl1
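The steps above assume an Ubuntu environment; with Multipass, creating one might look like this (the VM name and resource sizes are my assumptions — Yocto builds are disk- and CPU-hungry, so size generously):

```shell
# Launch an Ubuntu VM to use as the Yocto build host.
multipass launch --name yocto-build --cpus 4 --memory 8G --disk 100G

# Open a shell in the VM, then run the apt install line above inside it.
multipass shell yocto-build
```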
Extracting financial disclosure reports and police blotter narratives using OpenAI's Structured Outputs

tl;dr: this demo shows how to call OpenAI's gpt-4o-mini model, provide it with the URL of a screenshot of a document, and extract data that follows a schema you define. The results are pretty solid even with little effort spent defining the data — and no effort spent on data prep. OpenAI's API could be a cost-efficient tool for large-scale data-gathering projects involving public documents.
OpenAI announced Structured Outputs for its API, a feature that allows users to specify the fields and schema of extracted data, and guarantees that the JSON output will follow that specification.
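As a rough sketch of what such a request looks like (the schema's field names and the screenshot URL are illustrative assumptions, not taken from this article), the Chat Completions payload pairs a vision message with a `response_format` of type `json_schema`:

```python
import json

# Illustrative JSON Schema for one row of an assets table in a
# financial disclosure report (the field names are assumptions).
asset_row = {
    "type": "object",
    "properties": {
        "asset_name": {"type": "string"},
        "owner": {"type": "string"},
        "value_range": {"type": "string"},
        "income_type": {"type": "string"},
    },
    "required": ["asset_name", "owner", "value_range", "income_type"],
    "additionalProperties": False,
}

schema = {
    "type": "object",
    "properties": {"assets": {"type": "array", "items": asset_row}},
    "required": ["assets"],
    "additionalProperties": False,
}

# Chat Completions payload: a text instruction plus an image URL,
# with response_format requesting strict Structured Outputs.
payload = {
    "model": "gpt-4o-mini",
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Extract every asset listed in this disclosure report."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/disclosure-page.png"}},
        ],
    }],
    "response_format": {
        "type": "json_schema",
        "json_schema": {"name": "asset_rows", "strict": True, "schema": schema},
    },
}

print(payload["response_format"]["type"])  # → json_schema
```

With `"strict": True`, the API constrains generation so the returned JSON is guaranteed to match the schema — which is what makes the "no data prep" claim above workable at scale.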
For example, given a Congressional financial disclosure report, with assets defined in a table like this: