I am in favor of the handle keyword's ability to chain: it lets you, for example, print the variables that were set earlier and are the actual root cause of the error, instead of just dealing with the symptom of the problem.
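For illustration, here is a sketch in the draft design's syntax (this is the Go 2 error-handling draft only; check and handle are not compilable Go today). Handlers fire from the innermost outward, so an inner handler can record the value that actually caused the failure before the outer one wraps and returns:

// Draft-design syntax, not valid Go: `check` unwraps an error and
// triggers the enclosing `handle` chain, innermost handler first.
func parseAll(paths []string) (err error) {
	handle err {
		return fmt.Errorf("parseAll: %v", err) // outer handler, runs last
	}
	for _, p := range paths {
		handle err {
			// Inner handler, runs first: it can name the path that is
			// the actual root cause, not just the downstream symptom.
			err = fmt.Errorf("parsing %q: %v", p, err)
		}
		check parse(p)
	}
	return nil
}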
tldr: go build only produces an executable for main packages. Check that the package name of your main file is main.
I am an idiot. It's only through mistakes that you learn, though. I have recently been running into issues with my new golang projects. ONLY the new ones. Whenever I made a new project, it would always have issues. The biggest issue was that "go build" would not produce an executable without "-o main.exe" as an argument. (For a non-main package, go build discards its result, and -o makes it write the compiled package archive instead, which is where the !<arch> below comes from.) When a file was produced and I ran it, I got:
$ ./main.exe
./main.exe: line 1: syntax error near unexpected token `newline'
./main.exe: line 1: `!<arch>'
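For reference, the smallest thing go build will happily turn into an executable looks like the below. If the package clause says anything other than main, or there is no func main, the output is a package archive rather than a program:

// main.go
package main

import "fmt"

// go build produces an executable only for package main
// with a func main entry point.
func main() {
	fmt.Println("hello")
}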
Writing this for search engine indexing more than anything. TLDR: kubernetes cluster DNS was messed up. Had to rebuild my cluster.
I recently ran into an issue with my kubernetes cluster not being able to run a container that I was building. The container was a relatively simple golang program: do a GET request, parse the JSON, insert into a remote database.
When running in my cluster (self-rolled, high-availability, if that matters), the program panicked on an x509 error, specifically x509: certificate is valid for pfSense-5d68c9b017846, not api.pathofexile.com
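A quick way to confirm that cluster DNS is to blame is to resolve the name from inside a pod. A throwaway program like this (hostname taken from the error above) prints what the pod's resolver actually returns; a LAN address like the pfSense box's, instead of the API's public IPs, means DNS is misresolving:

package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	// Resolves using whatever resolver the process is configured with,
	// i.e. cluster DNS when run inside a pod.
	addrs, err := net.LookupHost("api.pathofexile.com")
	if err != nil {
		fmt.Fprintln(os.Stderr, "lookup failed:", err)
		os.Exit(1)
	}
	for _, a := range addrs {
		fmt.Println(a)
	}
}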
openapi: 3.0.0
info:
  version: "0.0.1"
  title: StreemTech API
  description: A Starter StreemTech API that contains different status codes etc. and the basics of the api as is created so far.
  contact:
    name: API Support
    url: http://wiki.streem.tech/api
    email: [email protected]
  license:
TLDR: add {withCredentials: true} to the options sent in the request. In Angular I had to create an interceptor to do the job.
I have been fighting with my server, attempting to send a UUID with my requests to give my non-logged-in clients some kind of persistence. I started by passing it as a header, but because one of my requests uses web-sockets, I can't use headers for everything. This means that I have to use a cookie for the requests. I would have rather used the header, but whatever.
Storing the cookie is important anyway, because the UUID should be kept between requests. As such, on every request the UUID is checked, and if it doesn't exist, the cookie is created. The header is then added to the request where needed.
For most of the calls, I had already created an interceptor that would generate a header to add to the calls. Because websockets don't keep headers (WHY that decision was made when you literally upgrade the protocol I DON'T know, but I digress), I had to use a cookie or come up with something else.
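Server-side, two things have to line up for {withCredentials: true} to do anything useful: the cookie has to be minted when it is missing, and CORS has to explicitly allow credentials (a concrete origin, never *). Here is a rough Go sketch of that middleware; the cookie name, origin, and the github.com/google/uuid dependency are illustrative choices, not anything canonical:

package main

import (
	"net/http"

	"github.com/google/uuid"
)

// ensureUUID mints the persistence cookie when absent and sets the
// CORS headers a credentialed cross-origin request requires.
func ensureUUID(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if _, err := r.Cookie("client-uuid"); err != nil {
			http.SetCookie(w, &http.Cookie{
				Name:     "client-uuid",
				Value:    uuid.NewString(),
				Path:     "/",
				HttpOnly: true,
				Secure:   true,
				SameSite: http.SameSiteNoneMode, // cross-site cookies must also be Secure
			})
		}
		// withCredentials requires a concrete origin; "*" is rejected.
		w.Header().Set("Access-Control-Allow-Origin", "https://app.example.com")
		w.Header().Set("Access-Control-Allow-Credentials", "true")
		next.ServeHTTP(w, r)
	})
}

Cookies also ride along on the websocket handshake automatically, which is what makes this work where the header approach could not.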
package main

import (
	"fmt"
	"sync"
	"time"

	"golang.org/x/time/rate"
)
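The snippet above is cut off after its imports, but between sync, time, and golang.org/x/time/rate it was presumably headed toward rate-limited concurrent work. A minimal self-contained sketch in that direction (the context import and all the specifics are my additions):

package main

import (
	"context"
	"fmt"
	"sync"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// Permit one event every 200ms with a burst of 1.
	limiter := rate.NewLimiter(rate.Every(200*time.Millisecond), 1)

	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			// Wait blocks until the limiter allows another event.
			if err := limiter.Wait(context.Background()); err != nil {
				return
			}
			fmt.Println("request", n, "at", time.Now().Format(time.StampMilli))
		}(i)
	}
	wg.Wait()
}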
apiVersion: v1
kind: ConfigMap
metadata:
  # any name can be used; Velero uses the labels (below)
  # to identify it rather than the name
  name: change-storage-class-config
  # must be in the velero namespace
  namespace: velero
  # the below labels should be used verbatim in your
  # ConfigMap
  labels:
    velero.io/plugin-config: ""
    velero.io/change-storage-class: RestoreItemAction
One of the things I was sure must exist, but had never managed to find anywhere until this point, was a distributed time delay. That is, a way to add data to a bucket/queue/dataset/what have you, and then have work done on that data after a pre-determined delay, with the delay independent for each item (per-item delays automatically allow for the same delay on every item).
When working in a single service, using a sleep command is often good enough, but when you have to wait longer than about a minute, you should be using a separate (ideally distributed) tool.
The reason, in my opinion, is that you should not trust your service not to be shut down between the time the wait starts and the time it ends.
This gist explains, roughly, how to create a time delay queue in RabbitMQ (and thus AMQP in general), and includes the Terraform code required to do so.
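The gist's code is Terraform; purely for illustration, the same pattern in Go with the amqp091-go client (queue names, URL, and the 30-second delay are made up here) looks roughly like this. A delay queue with no consumers dead-letters expired messages onto the real work queue, and the per-message Expiration gives each item its own delay:

package main

import (
	"context"
	"log"
	"time"

	amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}
	defer ch.Close()

	// The queue consumers actually read from.
	if _, err := ch.QueueDeclare("work", true, false, false, false, nil); err != nil {
		log.Fatal(err)
	}

	// The delay queue: nothing consumes it, so messages sit until their
	// TTL expires and they are dead-lettered to "work" via the default exchange.
	if _, err := ch.QueueDeclare("delay", true, false, false, false, amqp.Table{
		"x-dead-letter-exchange":    "",
		"x-dead-letter-routing-key": "work",
	}); err != nil {
		log.Fatal(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Per-message TTL in milliseconds, as a string: this item surfaces
	// on "work" after 30 seconds.
	if err := ch.PublishWithContext(ctx, "", "delay", false, false, amqp.Publishing{
		Body:       []byte(`{"job":"example"}`),
		Expiration: "30000",
	}); err != nil {
		log.Fatal(err)
	}
}

One caveat with this approach: RabbitMQ only expires messages at the head of a queue, so a short delay queued behind a long one waits for it. If delays vary wildly, the delayed-message exchange plugin avoids that.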
tldr: check to make sure you are pointing your log requests at the http input port you set in the pipeline instead of the logstash api port
Some days you just want to hit your head against a wall. I have been having issues for the last month or two trying to figure out why my logstash logging endpoint was not working. (It's been lower priority, among other things.)
I recently migrated my stuff to a new kubernetes environment and wanted to try out ECK to deploy my elasticsearch, logstash, etc. (Not really worth it over just using the bitnami helm charts unless you are deploying a LOT of elasticsearch clusters.)
The one issue I was having was that I was unable to figure out how to get my logstash instance working again: all of my logs were 404ing.
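In hindsight the 404s make sense if the requests were hitting logstash's monitoring API (9600 by default) rather than the pipeline's http input. A throwaway probe like this (the hostname and the 8080 input port are assumptions) makes it obvious which port is which, since only the input port will accept a POSTed event:

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	// 9600 is logstash's default API port; 8080 stands in for whatever
	// port the pipeline's http input was configured with.
	for _, url := range []string{"http://logstash:9600/", "http://logstash:8080/"} {
		resp, err := http.Post(url, "application/json", strings.NewReader(`{"message":"ping"}`))
		if err != nil {
			fmt.Println(url, "error:", err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Println(url, resp.Status, strings.TrimSpace(string(body)))
	}
}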
After a lot of digging and port forwarding shenanigans in kubernetes, I FINALLY noticed the following log line when restarting my logstash instance.