By reading vastly different styles of text (such as fiction and philosophy), it may be possible to learn a writing style that draws on both, resulting in a completely unique style. This is useful for artificial data generation.
- Character-level neural language model (a minimal sketch follows this list)
- Regularized by GANs
- Learning by raw sampling alone yields poor results, so a custom training algorithm is needed (I have a promising one, but it is not yet implemented or tested)
- Implement and improve the custom training loop
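A minimal sketch of the character-level model in PyTorch; the class name, layer sizes, and toy corpus are illustrative assumptions, and the GAN regularizer and custom training loop are not shown:

```python
# Minimal character-level language model sketch (names/sizes assumed).
import torch
import torch.nn as nn

class CharLM(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x, h=None):
        e = self.embed(x)          # (batch, seq, embed_dim)
        out, h = self.rnn(e, h)    # (batch, seq, hidden_dim)
        return self.head(out), h   # logits over the next character

# Toy usage: next-character prediction with cross-entropy.
corpus = "fiction and philosophy blended together"
chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}
ids = torch.tensor([stoi[c] for c in corpus]).unsqueeze(0)

model = CharLM(vocab_size=len(chars))
logits, _ = model(ids[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)), ids[:, 1:].reshape(-1))
loss.backward()
```

A GRU is used here only for brevity; the same next-character objective applies to any autoregressive backbone.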
A NIPS paper was published that pushed the boundaries of steganography using deep-learning autoencoders. I have personally implemented that paper: https://github.com/harveyslash/Deep-Steganography.
My custom implementation yields good results. The quality of images can be improved by
- using adversarial learning
- using a more sophisticated loss such as SSIM
- implementing and improving on the paper's results using the two points above (a sketch of the baseline setup follows this list)
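For reference, a minimal sketch of the hide/reveal setup with plain MSE losses; the network names, shapes, and the beta weight are illustrative assumptions, and the adversarial and SSIM terms above would be added on top of (or substituted for) the MSE terms:

```python
# Minimal hide/reveal steganography sketch (names/shapes assumed).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU())

class HideNet(nn.Module):
    # Takes cover + secret (concatenated on channels) and emits a
    # container image that should look like the cover.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(6, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())
    def forward(self, cover, secret):
        return self.net(torch.cat([cover, secret], dim=1))

class RevealNet(nn.Module):
    # Recovers the secret from the container alone.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(3, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())
    def forward(self, container):
        return self.net(container)

hide, reveal = HideNet(), RevealNet()
cover = torch.rand(4, 3, 64, 64)
secret = torch.rand(4, 3, 64, 64)

container = hide(cover, secret)
recovered = reveal(container)

# Joint loss: container should match the cover, recovered should match
# the secret. The relative weight beta is an assumed value here.
beta = 0.75
loss = nn.functional.mse_loss(container, cover) \
     + beta * nn.functional.mse_loss(recovered, secret)
loss.backward()
```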
Natural queries like 'a brown car parked by the park' should find the images that best match the query.
Given a query, the model was trained to generate a representation of the image such that the MSE between the generated representation and the actual representation of the image is low.
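A minimal sketch of that objective, assuming a GRU text encoder and precomputed CNN image features; all module names and dimensions are illustrative:

```python
# Query-to-image-representation regression sketch (names/sizes assumed).
import torch
import torch.nn as nn

class QueryEncoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, repr_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, repr_dim, batch_first=True)
    def forward(self, tokens):
        _, h = self.rnn(self.embed(tokens))
        return h.squeeze(0)            # (batch, repr_dim)

encoder = QueryEncoder(vocab_size=10000)
query = torch.randint(0, 10000, (8, 7))   # 8 queries, 7 tokens each
image_repr = torch.randn(8, 512)          # e.g. precomputed CNN features

pred = encoder(query)
loss = nn.functional.mse_loss(pred, image_repr)
loss.backward()

# At query time, rank images by the distance between the predicted
# representation and each image's stored representation.
```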
A basic implementation was done. The results were somewhat working but unsatisfactory: the model becomes extremely sensitive to certain keywords while almost completely ignoring others.
The issue with this approach is that the generated representation is too global. It will always favour keywords like 'car' or 'duck' because these words correspond to the largest share of activations.
The task is to figure out ways to look at different parts of an image individually with respect to the query; one possible direction is sketched below.
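One hypothetical direction, not something implemented or tested here: attend over a spatial CNN feature map with the query vector, so that region-level features are matched instead of a single global representation:

```python
# Query-conditioned attention over image regions (hypothetical sketch).
import torch
import torch.nn.functional as F

def query_attention(feat_map, query):
    # feat_map: (batch, C, H, W) spatial CNN features
    # query:    (batch, C) encoded query vector
    b, c, h, w = feat_map.shape
    regions = feat_map.flatten(2).transpose(1, 2)    # (batch, H*W, C)
    scores = torch.bmm(regions, query.unsqueeze(2))  # (batch, H*W, 1)
    weights = F.softmax(scores / c ** 0.5, dim=1)    # attention over regions
    return (regions * weights).sum(dim=1)            # (batch, C)

feat = torch.randn(4, 512, 7, 7)
q = torch.randn(4, 512)
attended = query_attention(feat, q)   # matched against q instead of a
print(attended.shape)                 # single global vector: (4, 512)
```

This way, a query mentioning 'car' can weight the car region highly without drowning out the rest of the sentence, which directly targets the keyword-dominance problem described above.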