It's every bit as complicated to build software against as Apache Kafka, and reasoning about repartitioning, retries, and failover is a huge pain that only matters if you're pushing TB/s of data. Kinesis Firehose will happily take many, many MB/s, maybe even GB/s, without any of that complexity.
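To give a sense of how little code the Firehose side takes, here's a minimal producer sketch using boto3. The stream name `my-delivery-stream` is a made-up placeholder, and it assumes you've already created a delivery stream with a destination (say, S3) configured:

```python
# Minimal Firehose producer sketch (assumes boto3 is installed and AWS
# credentials are configured; "my-delivery-stream" is a placeholder for a
# delivery stream you've already created with, e.g., an S3 destination).
import json

import boto3

firehose = boto3.client("firehose")

def send_event(event: dict) -> None:
    # Firehose buffers and batches on its own; one call per record is all it takes.
    # The trailing newline keeps records newline-delimited once they land in S3.
    firehose.put_record(
        DeliveryStreamName="my-delivery-stream",
        Record={"Data": (json.dumps(event) + "\n").encode("utf-8")},
    )

send_event({"user_id": 123, "action": "click"})
```

That's the whole producer: no shards, no partition keys, no checkpointing.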
Kinesis Firehose is pay-by-the-drink (priced per byte ingested) and pretty darn cheap. Real Kinesis (priced per shard-hour) gets expensive fast, since it's provisioned infrastructure. Ditto Kinesis Data Analytics. One more reason not to use them.
If you find yourself writing your own Kinesis consumer instead of letting Firehose deliver the data for you, run screaming into the night. Here is just the first half of the steps required to connect to a Kinesis stream and consume data:
The KCL (Kinesis Client Library) acts as an intermediary between your record processing logic and Kinesis Data Streams. The KCL performs the following tasks: