cse7321 viva questions
What is a perceptron?
What is a fully connected layer?
Shallow vs Deep Learning (Why Deep?)
Issues with Deep Learning
Uses of Unsupervised methods
What are autoencoders?
Loss functions for Regression, Classification
What are dropouts and their suggested range?
Three key advantages of CNNs over NNs
Which parameter influences the depth of a feature map?
What is image augmentation?
How does an RNN work?
Formula for a Simple RNN?
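The Simple RNN recurrence is h_t = tanh(x_t · W_x + h_{t-1} · W_h + b). A minimal sketch with hypothetical dimensions (3-dim input, 4-dim hidden state):

```python
import numpy as np

def simple_rnn_step(x_t, h_prev, W_x, W_h, b):
    # Simple RNN recurrence: h_t = tanh(x_t @ W_x + h_{t-1} @ W_h + b)
    return np.tanh(x_t @ W_x + h_prev @ W_h + b)

# Hypothetical sizes for illustration only
rng = np.random.default_rng(0)
x_t = rng.standard_normal(3)          # current input
h_prev = np.zeros(4)                  # previous hidden state
W_x = rng.standard_normal((3, 4))     # input-to-hidden weights
W_h = rng.standard_normal((4, 4))     # hidden-to-hidden weights
b = np.zeros(4)
h_t = simple_rnn_step(x_t, h_prev, W_x, W_h, b)  # shape (4,), values in (-1, 1)
```

The same weight matrices are reused at every time step, which is what lets the network handle sequences of arbitrary length.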
What is CNN1D, and how is it useful?
Which parameters influence the output dim of a feature map? (kernel size, stride, padding, number of f-maps)
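The standard formula relating these parameters, as a small sketch (note the fourth parameter, the number of feature maps, sets the output depth rather than the spatial size):

```python
def conv_output_size(input_size, kernel_size, stride, padding):
    # Spatial output dim of a convolution: floor((W - K + 2P) / S) + 1
    return (input_size - kernel_size + 2 * padding) // stride + 1

# e.g. a 32x32 input with a 3x3 kernel, stride 1, padding 1 keeps its size
same = conv_output_size(32, 3, 1, 1)      # 32
# a 28x28 input with a 5x5 kernel, stride 1, no padding shrinks
valid = conv_output_size(28, 5, 1, 0)     # 24
```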
How do stacked autoencoders work? / What are stacked autoencoders?
How can autoencoders be used as feature generators?
How are weights initialized?
Range of learning rates and momentum (2 separate questions)
How will an MLP help as opposed to a perceptron?
List all activations and the range of each activation function
Advantage of ReLU over Sigmoid
How is loss minimized? (Gradient Descent)
Use of Gradient Descent
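A minimal sketch of gradient descent on a toy loss f(w) = (w - 3)^2, whose gradient is 2(w - 3); the learning rate and step count here are illustrative, not prescribed:

```python
def gradient_descent(grad, w0, lr=0.1, steps=100):
    # Repeatedly step against the gradient to reduce the loss
    w = w0
    for _ in range(steps):
        w = w - lr * grad(w)
    return w

# Minimize f(w) = (w - 3)^2; the minimizer is w = 3
w_star = gradient_descent(grad=lambda w: 2 * (w - 3), w0=0.0)
```

Each update moves the parameter a small step in the direction of steepest loss decrease; in a neural network the same rule is applied to every weight, with gradients computed by backpropagation.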
Difference between Contractive and Denoising autoencoders
What is Local Connectivity, Spatial Arrangement, Parameter Sharing?
What does pooling achieve? (Local translational and rotational invariance)
Which is the simpler model: 1 layer with 256 nodes, or 2 layers with 16 nodes each? How do you justify?
What is Batch Normalization?
What is regularization?
Regularization Techniques:
EarlyStopping; WeightDecay; Dropouts; BatchNorm; L1; L2
What are embeddings?
Dimension of an embedding matrix?
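An embedding matrix has shape (vocabulary size, embedding dimension), and a lookup is just a row index. A sketch with hypothetical sizes:

```python
import numpy as np

vocab_size, embed_dim = 10000, 50            # hypothetical sizes
rng = np.random.default_rng(0)
E = rng.standard_normal((vocab_size, embed_dim))  # embedding matrix

token_id = 42
vec = E[token_id]   # row 42: the 50-d embedding vector for token id 42
```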
########################################################### | |
What are the assumptions of Linear Programming?
What is sensitivity analysis of the shadow price?
Pipeline of a genetic algorithm
Difference between mutation and crossover
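The genetic algorithm pipeline (initialize → select → crossover → mutate → repeat) can be sketched on the toy "OneMax" problem of maximizing the number of 1-bits; all parameter values here are illustrative:

```python
import random

def genetic_algorithm(fitness, n_bits=10, pop_size=20, generations=50,
                      crossover_rate=0.9, mutation_rate=0.02):
    # 1. Initialize a random population of bitstrings
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # 2. Selection: binary tournament (fitter of two random individuals)
        def select():
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            # 3. Crossover: single-point recombination of two parents
            if random.random() < crossover_rate:
                point = random.randint(1, n_bits - 1)
                c1 = p1[:point] + p2[point:]
                c2 = p2[:point] + p1[point:]
            else:
                c1, c2 = p1[:], p2[:]
            # 4. Mutation: flip each bit with small probability
            for c in (c1, c2):
                for i in range(n_bits):
                    if random.random() < mutation_rate:
                        c[i] = 1 - c[i]
                children.append(c)
        pop = children[:pop_size]
    return max(pop, key=fitness)

best = genetic_algorithm(fitness=sum)  # maximize the count of 1-bits
```

Crossover recombines genetic material from two parents, while mutation makes small random changes to a single individual; the former exploits existing solutions, the latter maintains diversity.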