from fastai.basic_data import *
from fastai.core import *
from fastai.layers import embedding
from fastai.basic_train import Learner
from fastai.tabular.models import TabularModel
from torch import nn, optim, as_tensor, Tensor
import torch
import logging

def tabularexperiment_learner(data:DataBunch, layers:Collection[int], emb_szs:Dict[str,int]=None, metrics=None,
                              ps:Collection[float]=None, emb_drop:float=0., y_range=None, use_bn:bool=True,
                              **learn_kwargs):
    # The original snippet was truncated after the first line of the signature;
    # it is completed here as a sketch mirroring fastai v1's `tabular_learner`.
    # The experimental model described below would be swapped in for `TabularModel`.
    emb_szs = data.get_emb_szs(ifnone(emb_szs, {}))
    model = TabularModel(emb_szs, len(data.cont_names), out_sz=data.c, layers=layers,
                         ps=ps, emb_drop=emb_drop, y_range=y_range, use_bn=use_bn)
    return Learner(data, model, metrics=metrics, **learn_kwargs)

Export fast.ai models to ONNX

fast.ai is an amazing library: it gives us access to a vast array of pre-trained models, but it can also be used as a foundation for new models.

The fast.ai tabular model is a great model that uses:

  • an embedding layer for representing categorical features
  • a parametrized number of hidden layers modeling the continuous features
  • batch normalization, dropout and weight decay to regularize the model (preventing overfitting and allowing the model to be trained faster)
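As a minimal sketch of the composition above (the `TinyTabular` name and its parameters are mine, not fastai's API): embeddings for the categorical features are concatenated with the batch-normalized continuous features, then passed through regularized hidden blocks.

```python
import torch
from torch import nn

class TinyTabular(nn.Module):
    # Illustrative miniature of a fastai-style tabular model (not fastai's code):
    # one embedding per categorical feature, then hidden blocks built from
    # BatchNorm + Dropout + Linear + ReLU, and a final linear output layer.
    def __init__(self, emb_szs, n_cont, layers, out_sz, p=0.1):
        super().__init__()
        # emb_szs: list of (cardinality, embedding width) per categorical feature
        self.embeds = nn.ModuleList([nn.Embedding(c, s) for c, s in emb_szs])
        self.bn_cont = nn.BatchNorm1d(n_cont)
        n_emb = sum(s for _, s in emb_szs)
        sizes = [n_emb + n_cont] + list(layers)
        blocks = []
        for n_in, n_out in zip(sizes, sizes[1:]):
            blocks += [nn.BatchNorm1d(n_in), nn.Dropout(p),
                       nn.Linear(n_in, n_out), nn.ReLU()]
        blocks.append(nn.Linear(sizes[-1], out_sz))
        self.layers = nn.Sequential(*blocks)

    def forward(self, x_cat, x_cont):
        # Look up one embedding per categorical column, then join with continuous
        x = torch.cat([e(x_cat[:, i]) for i, e in enumerate(self.embeds)], dim=1)
        x = torch.cat([x, self.bn_cont(x_cont)], dim=1)
        return self.layers(x)
```

A model like this takes two inputs per batch: a long tensor of categorical indices and a float tensor of continuous values.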

The code is simple enough that reading it takes only a small amount of time. After reading it, I wanted to make my "own" version of the tabular model; mainly, I wanted to try a new way to compose the hidden layers. This model uses Batch Norm + Linear + Dropout + Activation Function (ReLU), but some papers claim that the Activation Function should be placed before the Batch Norm and Dropout layers (activation functions used to be zero-centered).

@franperezlopez