Modern deep transfer learning approaches have mainly focused on learning \emph{generic} feature vectors from one task that are transferable to other tasks, such as word embeddings in language and pretrained convolutional features in vision. However, these approaches usually transfer unary features and largely ignore more structured graphical representations. This work explores the possibility of learning generic latent graphs that capture dependencies between pairs of data units (e.g., words or pixels) from large-scale unlabeled data and transferring the graphs to downstream tasks.
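As an illustrative sketch (not the paper's actual architecture), pairwise dependencies between data units can be represented as a soft adjacency matrix computed from dot-product affinities over unit embeddings; the function name `latent_graph` and the embedding shapes are our assumptions:

```python
import torch

def latent_graph(units: torch.Tensor) -> torch.Tensor:
    # units: (n_units, embed_dim) feature vectors for data units (words, pixels).
    # Returns an (n_units, n_units) matrix of soft pairwise dependencies:
    # scaled dot-product affinities, normalized into a distribution per row.
    scores = units @ units.T / units.size(-1) ** 0.5
    return torch.softmax(scores, dim=-1)

# Each row of the result is a distribution over the other units,
# i.e. a weighted latent graph over the input.
g = latent_graph(torch.randn(5, 16))
```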
We propose to tackle end-to-end learning from raw waveform signals by introducing learnable continuous time-frequency atoms. These filters are derived by first defining a functional space with a given smoothness order and boundary conditions, from which we obtain parametric analytical filters. Their differentiability allows gradient-based optimization, so any Deep Neural Network (DNN) can be equipped with these filters. This enables us to tackle, in a front-end fashion, a large-scale bird detection task based on the freefield1010 dataset.
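A minimal sketch of the idea, assuming a Gabor-like parameterization (Gaussian-windowed cosines with learnable center frequency and bandwidth) rather than the paper's exact derivation; the class name and parameter names are ours:

```python
import torch
import torch.nn as nn

class LearnableFilterbank(nn.Module):
    """Parametric analytic filters applied to raw waveforms by 1-D convolution.
    Because the kernels are differentiable functions of a few parameters,
    gradients from any downstream DNN flow into the filterbank itself."""

    def __init__(self, n_filters: int = 8, kernel_size: int = 101):
        super().__init__()
        self.kernel_size = kernel_size
        # Learnable center frequencies (cycles/sample) and log bandwidths.
        self.freq = nn.Parameter(torch.linspace(0.01, 0.4, n_filters))
        self.log_sigma = nn.Parameter(torch.full((n_filters,), 2.0))

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        # wav: (batch, 1, time) raw waveform.
        t = torch.arange(self.kernel_size, dtype=torch.float32,
                         device=wav.device) - self.kernel_size // 2
        window = torch.exp(-0.5 * (t / self.log_sigma.exp().unsqueeze(1)) ** 2)
        kernels = window * torch.cos(2 * torch.pi * self.freq.unsqueeze(1) * t)
        # (n_filters, 1, kernel_size) kernels; output: (batch, n_filters, time).
        return nn.functional.conv1d(wav, kernels.unsqueeze(1),
                                    padding=self.kernel_size // 2)
```

Training the surrounding network with any gradient-based optimizer also updates `freq` and `log_sigma`, which is what makes the front-end learnable end-to-end.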
module Style exposing (..)

type CustomColor
    = CItem
    | CItemHidden
    | CUser
    | CUserHidden
    | Grey
    | White
import torch

def cum_softmax(t: torch.Tensor) -> torch.Tensor:
    # t shape: (..., sm_d); softmax is taken over every prefix of the last dim.
    # Returns (..., sm_d, sm_d) where out[..., k, i] = exp(t_k) / sum_{j<=i} exp(t_j)
    # for k <= i and 0 otherwise, computed stably via the running max.
    d = t.size(-1)
    tmax = t.cummax(dim=-1)[0].unsqueeze(-2)            # (..., 1, sm_d): running max per prefix
    numerator = (t.unsqueeze(-1) - tmax).exp()          # (..., sm_d, sm_d): [k, i] = exp(t_k - tmax_i)
    mask = torch.ones(d, d, dtype=torch.bool, device=t.device).triu()  # keep k <= i only
    numerator = numerator * mask
    denominator = numerator.sum(dim=-2, keepdim=True)   # (..., 1, sm_d): stable prefix sums
    return numerator / denominator
html {
    font-size: 16px;
    width: 100%;
}

table td, table th { overflow-wrap: anywhere; }

.title > a {
    color: black;
    font-size: 1.5rem;
    font-family: "charter", "Palatino Linotype", "Source Serif Pro", "Georgia", serif;
}
import csv
from collections import Counter

def get_table(m):
    # Count how often each segment character appears across all patterns in m,
    # then invert the mapping: occurrence count -> character. Segments 'e', 'b',
    # and 'f' are the only ones with unique counts (4, 6, and 9 respectively).
    invert_counter = {count: ch for ch, count in Counter(''.join(m)).items()}
    digit_table = {
        'e': invert_counter[4],
        'b': invert_counter[6],
        'f': invert_counter[9],
    }
    return digit_table
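For context, the counts 4, 6, and 9 above come from how often each segment letter appears across the ten canonical seven-segment digit patterns; a self-contained check (the `DIGITS` list is the standard seven-segment encoding, our assumption about this snippet's input):

```python
from collections import Counter

# Canonical seven-segment patterns for digits 0-9.
DIGITS = ['abcefg', 'cf', 'acdeg', 'acdfg', 'bcdf',
          'abdfg', 'abdefg', 'acf', 'abcdefg', 'abcdfg']

counts = Counter(''.join(DIGITS))
# Invert count -> character; only counts 4, 6, and 9 are unique,
# so they unambiguously identify segments 'e', 'b', and 'f'.
unique = {count: ch for ch, count in counts.items()}
print(unique[4], unique[6], unique[9])  # e b f
```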