Published: Aug 19, 2025 License: MIT Imports: 13 Imported by: 0


Endpoint Hitter

Introduction

A small application that hits a requested endpoint in parallel, with a UUID as a path variable. The endpoint, the method type, the number of parallel requests, and the authentication credentials can be given as application parameters.

The response status for each UUID, together with the generated transaction ID, is logged to a separate file. Other application-related logs are sent to stdout.

For example:

We can execute a series of POST requests to the endpoint https://{env-domain}/__post-publication-combiner/{uuid} for the UUIDs read from {uuid_file_name}.txt (uuid.txt being the default).
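The parallel fan-out described above can be sketched as a small worker pool. This is an illustrative sketch, not the tool's actual code: `hitAll`, the injected `hit` function, and the worker count are assumptions made for the example.

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

// hitAll fans UUIDs out to a fixed number of workers and records the HTTP
// status returned for each one. The hit function is injected so the pool
// can be exercised without a live endpoint; in the real tool it would
// perform the actual request and log the transaction ID as well.
func hitAll(urlTemplate string, uuids []string, workers int, hit func(url string) (int, error)) map[string]int {
	jobs := make(chan string)
	statuses := make(map[string]int, len(uuids))
	var mu sync.Mutex

	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for uuid := range jobs {
				url := strings.Replace(urlTemplate, "{uuid}", uuid, 1)
				status, err := hit(url)
				if err != nil {
					status = 0 // recorded so failures remain visible
				}
				mu.Lock()
				statuses[uuid] = status
				mu.Unlock()
			}
		}()
	}

	for _, u := range uuids {
		jobs <- u
	}
	close(jobs)
	wg.Wait()
	return statuses
}

func main() {
	// Stand-in for the real POST request; always reports 200 OK.
	fake := func(url string) (int, error) { return 200, nil }
	res := hitAll("https://env.example/__post-publication-combiner/{uuid}",
		[]string{"uuid-1", "uuid-2", "uuid-3"}, 2, fake)
	fmt.Println(len(res), res["uuid-1"]) // 3 200
}
```

Injecting the request function keeps the pool logic testable; the real application would substitute an authenticated HTTP client here.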

Installation

Download the source code and its dependencies, then build and run:

go build .

./endpoint-hitter [--help]

Deployment in k8s as a job

If you want to run it in Kubernetes, you first need to build a Docker image:

docker build -t coco/endpoint-hitter:latest .

and push it to Docker Hub (note: you need to log in to Docker before you can push images):

docker push coco/endpoint-hitter:latest

Then make the necessary changes in ./deployment/job.yaml and deploy the job:

kubectl apply -f ./deployment/job.yaml
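A Kubernetes Job manifest for this generally has the following shape. The image name comes from the build step above; everything else here (the env-var wiring of the settings, backoffLimit, restartPolicy) is an illustrative assumption — the repository's ./deployment/job.yaml is authoritative:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: endpoint-hitter
spec:
  backoffLimit: 0          # the tool has its own retry logic; don't restart the whole job
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: endpoint-hitter
          image: coco/endpoint-hitter:latest
          env:
            - name: THROTTLE             # assumed env-var names; see the Settings section
              value: "2"
            - name: WAIT_AFTER_THROTTLE
              value: "50"
```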

When the job is done, you can delete it via:

kubectl delete job endpoint-hitter

Settings

When reindexing images or clips in the Enriched Content Store, you can run with these settings:

THROTTLE = 3
WAIT_AFTER_THROTTLE = 10

When you reindex articles, however, they also trigger reindexing of the whole tree, including annotations. You therefore need to put less load on the topic and can run with:

THROTTLE = 2
WAIT_AFTER_THROTTLE = 50

This way you produce 40-50 article events per second. You may also split the articles into batches and clean the Kafka topic after every successful batch, which reduces the space used in the MSK Kafka service.

If you run the service to reindex the enriched-content-store, you may cause alerts on the Post-Publication-Combiner service: the generated load sometimes results in "503 Service Unavailable" or "500 Internal Server Error" responses, and the related logs in Post-Publication-Combiner trigger an alert. You must therefore tell the OpsCops team before you start the reindexing process. The endpoint-hitter has a retry mechanism and will retry these UUIDs three times before giving up; usually they pass on the second attempt.
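The retry behavior described above can be sketched generically. This is not the tool's actual code: `withRetry` and the treatment of 5xx statuses as retryable are assumptions made for the example.

```go
package main

import "fmt"

// withRetry calls hit up to attempts times, stopping at the first success.
// A transport error or a 5xx status counts as a failure, mirroring the
// documented behavior of retrying a UUID three times before giving up.
func withRetry(attempts int, hit func() (int, error)) (int, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		status, err := hit()
		if err == nil && status < 500 {
			return status, nil
		}
		if err == nil {
			err = fmt.Errorf("server error: %d", status)
		}
		lastErr = err
	}
	return 0, lastErr
}

func main() {
	calls := 0
	// Simulated endpoint that fails once with 503 and then succeeds,
	// matching "usually they pass on the second attempt".
	hit := func() (int, error) {
		calls++
		if calls == 1 {
			return 503, nil
		}
		return 200, nil
	}
	status, err := withRetry(3, hit)
	fmt.Println(status, err, calls) // 200 <nil> 2
}
```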

You can monitor the events in the Kafka topic to make sure that, in the end, all the UUIDs have been processed by Post-Publication-Combiner and sent to the topic for the enriched-content-rw-postgres-reindexer process to reindex them.
