Goals: Add links to reasonable, well-written explanations of how stuff works. No hype, and no vendor content where possible. Practical first-hand accounts of models in prod are eagerly sought.

Twitter has released the official API v2 endpoint for the bookmark feature: https://twittercommunity.com/t/build-with-bookmarks-on-the-twitter-api-v2/168804/
The following descriptions are, or will soon be, obsolete; I suggest using the new official API.
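For reference, a minimal sketch of reading bookmarks through the v2 endpoint, assuming you already hold an OAuth 2.0 user-context access token with the bookmark.read scope; the user ID and token below are placeholders:

```python
import requests

USER_ID = "1234567890"                   # placeholder: the authenticated user's numeric ID
ACCESS_TOKEN = "YOUR_OAUTH2_USER_TOKEN"  # placeholder: OAuth 2.0 user-context token

# GET /2/users/:id/bookmarks returns the authenticated user's bookmarked Tweets
resp = requests.get(
    f"https://api.twitter.com/2/users/{USER_ID}/bookmarks",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()
for tweet in resp.json().get("data", []):
    print(tweet["id"], tweet["text"])
```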
"""Creating thread safe and managed sessions using SQLAlchemy. | |
The sessions that are created are expected to be: | |
- thread safe | |
- handle committing | |
- handle rolling back on errors | |
- handle session removal/releasing once context or thread is closed. | |
Author: Nitish Reddy Koripalli | |
License: MIT |
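A minimal sketch of how such a session factory can look, built on SQLAlchemy's scoped_session with a context manager handling commit, rollback, and removal; this is an illustration under those assumptions, not the gist's full code, and the engine URL is a placeholder:

```python
from contextlib import contextmanager

from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine("sqlite:///example.db")              # placeholder URL
SessionFactory = scoped_session(sessionmaker(bind=engine))  # one session per thread

@contextmanager
def managed_session():
    """Yield a thread-local session; commit on success, roll back on error,
    and release the session back to the registry when the block exits."""
    session = SessionFactory()
    try:
        yield session
        session.commit()
    except Exception:
        session.rollback()
        raise
    finally:
        SessionFactory.remove()

# Usage:
# with managed_session() as session:
#     session.add(some_mapped_object)
```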
import { mapState } from 'vuex'

export default function mapStatesTwoWay (namespace, states, updateCb) {
  const mappedStates = mapState(namespace, states)
  const res = {}
  for (const key in mappedStates) {
    res[key] = {
      // two-way computed property: read via the mapped Vuex getter,
      // write by delegating to the caller-supplied update callback
      get: mappedStates[key],
      set (value) {
        updateCb.call(this, { [key]: value })
      }
    }
  }
  return res
}
By Emily Gill and Amber Rivera
The Pipeline constructor from sklearn allows you to chain transformers and estimators together into a sequence that functions as one cohesive unit. For example, if your model involves feature selection, standardization, and then regression, those three steps, each as its own class, could be encapsulated together via Pipeline.
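As a concrete illustration of the three-step example described above, here is a small runnable sketch; the specific choices of SelectKBest for feature selection and Ridge for regression are illustrative, not from the original tutorial:

```python
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import Ridge
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=200, n_features=20, noise=0.1, random_state=0)

pipe = Pipeline([
    ("select", SelectKBest(score_func=f_regression, k=10)),  # feature selection
    ("scale", StandardScaler()),                              # standardization
    ("ridge", Ridge(alpha=1.0)),                              # regression
])

pipe.fit(X, y)           # each step runs in sequence
print(pipe.score(X, y))  # the whole chain behaves like a single estimator
```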
<?
//
// [ BUY BTC & ETH DAILY ON BITSTAMP ]
// by @levelsio
//
// 2017-08-23
//
// 1) buy $40/day BTC
// 2) buy $10/day ETH
//
REGIONS = {
    'Auvergne-Rhône-Alpes': ['01', '03', '07', '15', '26', '38', '42', '43', '63', '69', '73', '74'],
    'Bourgogne-Franche-Comté': ['21', '25', '39', '58', '70', '71', '89', '90'],
    'Bretagne': ['35', '22', '56', '29'],
    'Centre-Val de Loire': ['18', '28', '36', '37', '41', '45'],
    'Corse': ['2A', '2B'],
    'Grand Est': ['08', '10', '51', '52', '54', '55', '57', '67', '68', '88'],
    'Guadeloupe': ['971'],
    'Guyane': ['973'],
    'Hauts-de-France': ['02', '59', '60', '62', '80'],
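Assuming the REGIONS mapping above is defined in full, a small usage sketch (not part of the original snippet) inverts it so any department code can be looked up directly:

```python
# Build a department-code -> region-name lookup from the REGIONS dict above.
DEPARTMENT_TO_REGION = {
    dept: region
    for region, depts in REGIONS.items()
    for dept in depts
}

print(DEPARTMENT_TO_REGION["2A"])  # -> 'Corse'
```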
I have been an aggressive Kubernetes evangelist over the last few years. It has been the hammer with which I have approached almost all my deployments, the one tool I have mentioned (shoved down clients' throats) in almost all my foremost communications with clients, and my go-to choice when I was mocking up my first startup (saharacluster.com).
A few weeks ago Docker 1.13 was released and I was tasked with replicating a client's Kubernetes deployment on Swarm, more specifically testing running Compose on Swarm.
And it was a dream!
All our apps were already dockerised, and all I had to do was make a few modifications to an existing compose file that I had used for testing prior to the Kubernetes deployment.
Given the ease with which I was able to expose our endpoints, manage volumes, handle networking, and deploy and tear down the setup, I honestly see no reason not to use Swarm. There is no mission-critical feature, or even incredibly convenient nice-to-have feature, in Kubernetes that I'm going to miss.
## Useful Commands
Get kubectl version:
kubectl version
Get cluster info:
kubectl cluster-info