Running Google Cloud Endpoint Tests Locally

source: https://commons.wikimedia.org/wiki/File:Esp_LTD_M-1000.jpg

Welcome to another one of my articles where I take notes on what I did to make GCP-related frameworks/tools do what I want. This time I was trying to set up Cloud Endpoints/gRPC to work locally so I can run tests.

Please note that everything here is as of May 2018. GCP sometimes quickly changes/fixes things, so please always do your own research.

My Requirements

  • I want to implement a gRPC server in Go
  • …which is accessed through ESP, controlled by Cloud Endpoints (which is the way to run gRPC servers behind Cloud Endpoints right now)
  • I want to test what’s going on in the local dev environment
  • I want it to be automated as much as possible, so others only need to run go test

Let The Journey Begin

So here go my notes.

Here are my previous notes on how to run ESP servers locally.

While I am sure more goes on behind the scenes, from a regular Joe user’s viewpoint, Cloud Endpoints is a glorified HTTP proxy plus associated metadata/configuration that can be set up from either an OpenAPI definition or a Protocol Buffers definition.

By creating a new Endpoints service and a versioned configuration, you can tell GCP which paths your service exposes, which parameters they accept, and so on. This knowledge is then used by a proxy server called the Extensible Service Proxy (ESP), which is responsible for driving most of the processing.
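For a gRPC service, this configuration comes from a compiled protobuf descriptor plus a service config YAML. As a rough sketch (the file names here are illustrative), creating a new versioned configuration looks like this:

```shell
# Compile the .proto files into a self-contained descriptor set
# (file names are placeholders for your own service definition)
protoc --include_imports --include_source_info \
  --descriptor_set_out=api_descriptor.pb your_service.proto

# Deploy the descriptor together with the Endpoints service
# config YAML; this produces a new config ID for the service
gcloud endpoints services deploy api_descriptor.pb api_config.yaml
```

The config ID that this prints is what you later pass to ESP as --version.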

ESP is conveniently packaged as a container, so you can easily run it anywhere. You may find some useful information in my article that was previously mentioned.

Running an ESP container requires you to pass it one non-trivial piece of data, which is the service account that the ESP should use when handling requests.

Now, obviously, just creating a service account and passing the file that contains its information is easy. Just do what you need to do in the IAM console.

You probably need to share this information with your team, who want to run the local tests.

While in most situations adding the JSON file that contains the credentials to your repository will cause no problems, you should always ask yourself: “What if the repository somehow became visible to the public?” I mean, you never know.

So it irks me to add this file as-is to persistent storage, let alone git.

My take on this was to use Cloud KMS.

I took the original serviceaccount.json file, encrypted it using gcloud kms encrypt, and committed that file to our repository. Then, my script that starts the ESP container decrypts this file, and passes it to the container.
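Assuming a keyring and key already exist in your project (the keyring/key names below are placeholders), the encryption step is the mirror image of the decryption you’ll see later:

```shell
# Encrypt the service account JSON; only the resulting .enc
# file gets committed to the repository
gcloud kms encrypt \
  --plaintext-file=serviceaccount.json \
  --ciphertext-file=esp-serviceaccount.json.enc \
  --location=global \
  --keyring=your-keyring \
  --key=your-key
```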

You obviously need to give your team proper authorization to decrypt the file (which can be added to your team members from the console), but this way at least you have control over who can perform this operation.

The ESP script will look something like this:

# abbreviated script snippet
ENCRYPTED_SERVICE_ACCOUNT_FILE=path/to/esp-serviceaccount.json.enc
SERVICE_ACCOUNT_FILE=tmp/serviceaccount.json
# This function just deletes the service account file so
# we don't keep it lying around
function delete_service_account_file {
rm $SERVICE_ACCOUNT_FILE
}
trap delete_service_account_file EXIT
# Use KMS to decrypt the encrypted service account information
gcloud kms decrypt \
--ciphertext-file=$ENCRYPTED_SERVICE_ACCOUNT_FILE \
--plaintext-file=$SERVICE_ACCOUNT_FILE \
--location=global \
--keyring=$YOUR_KEYRING \
--key=$YOUR_KEY
# The "tmp" directory is mounted by the ESP container
# so that ESP can read files from it, including the
# service account file
docker run \
--rm \
--name=$ESP_CONTAINER_NAME \
--publish=$ESP_PORT:$ESP_PORT \
-v "$(pwd)/tmp":/esp \
gcr.io/endpoints-release/endpoints-runtime:1 \
--service=$SERVICE_NAME \
--version=$CONFIG_ID \
--http_port=$ESP_PORT \
--backend=grpc://$GRPC_HOST:$GRPC_PORT \
--service_account_key=/esp/serviceaccount.json

Once you have this script working, it should be fairly trivial to start the ESP container for your Go tests.

First, you probably should be using the TestMain function so that you can set up the server before any other tests run.

func TestMain(m *testing.M) {
    setupESP()      // setup
    code := m.Run() // run the tests
    tearDownESP()   // teardown (os.Exit would skip deferred calls)
    os.Exit(code)
}

And in setupESP you just need to use os/exec.

containerName := "esp-test"
cmd := exec.Command(espScriptFile)
// Environment variables must be in KEY=value form
cmd.Env = append(os.Environ(), "ESP_CONTAINER_NAME="+containerName)
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
go cmd.Run()

Here, I’m explicitly passing the container name through the environment variable ESP_CONTAINER_NAME. If you look closely at the script that I previously posted, one of the arguments to the docker run command was an explicit container name. Instead of letting Docker give me a random name, the name is passed so that it’s easier to stop the ESP container upon finishing the tests.

defer func() {
    stopCmd := exec.Command("docker", "stop", containerName)
    stopCmd.Stdout = os.Stdout
    stopCmd.Stderr = os.Stderr
    stopCmd.Run()
}()

In the beginning, I’m sure you will misconfigure ESP and/or gRPC. That’s normal, and very expected.

But before you go and whip out whatever debugging-fu that you may have, I’m going to strongly suggest you add something like the following immediately after you start the ESP container. This code basically gives the ESP container 5 seconds to at least start listening for incoming connections.

connStr := " ... expected ESP addr:port to connect to ... "
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
espTicker := time.NewTicker(500 * time.Millisecond)
defer espTicker.Stop()
for loop := true; loop; {
    select {
    case <-ctx.Done():
        return errors.New("failed waiting for ESP port to become available")
    case <-espTicker.C:
        if conn, err := net.DialTimeout("tcp", connStr, 100*time.Millisecond); err == nil {
            conn.Close()
            loop = false
        }
    }
}

See, I have been coding long enough to know that when you have trouble with automated tooling that spawns servers, the first thing you should check is whether your servers are listening on the correct address.

As trivial as it sounds, it will save you countless hours. Take the advice.

Now that you know that the ESP container is accepting connections, as long as everything else is setup correctly you can start sending JSON requests to the ESP server, and expect to see your gRPC server getting something.

As it happens, you are probably not getting your gRPC requests. You screwed up something.

First thing to check is to make sure you have logging enabled in your gRPC server. Use your favorite interceptor. For example, to use the zap logger via the grpc_zap interceptor from go-grpc-middleware, you just do something like this:

func GRPCServer(l *zap.Logger) *grpc.Server {
    return grpc.NewServer(
        grpc_middleware.WithUnaryServerChain(
            grpc_zap.UnaryServerInterceptor(l),
        ),
        grpc_middleware.WithStreamServerChain(
            grpc_zap.StreamServerInterceptor(l),
        ),
    )
}

If that didn’t help (that is, you see no logs from the gRPC server), maybe you have a misconfiguration in your Cloud Endpoints config that is causing ESP to act oddly.

ESP automatically feeds data into Stackdriver, so there are going to be logs there to help you, but to be honest I find Stackdriver confusing to use for simple debugging like this.

It would be really nice if we could tell ESP to spit out more stuff, but at the time of writing, I couldn’t find any command line option to do this.

Luckily, ESP is just nginx on steroids. That means as long as we can tweak the nginx configuration file, we should be able to get debug output.

The first way I’m describing here is the more hackish way to do it: Just exec right into the ESP container:

docker exec -it $ESP_CONTAINER_NAME /bin/bash

Install your favorite text editor (or use sed, whatever), and edit /etc/nginx/endpoints/nginx.conf. The line that you want to edit is

error_log stderr warn;

Just change this to the following:

error_log stderr debug;

And then issue kill -HUP 1.

This will force nginx to reload the config, and now that it’s spewing out debug messages, you should have much better idea of what’s going on.
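If you find yourself flipping this switch repeatedly, the same hack can be scripted from outside the container (assuming the container name from the earlier startup script):

```shell
# Flip nginx inside the running ESP container to debug logging
docker exec $ESP_CONTAINER_NAME \
  sed -i 's/error_log stderr warn;/error_log stderr debug;/' \
  /etc/nginx/endpoints/nginx.conf

# HUP PID 1 (nginx) so it reloads the edited config
docker exec $ESP_CONTAINER_NAME kill -HUP 1
```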

The other way is to use the --nginx_config option. This allows you to specify an alternate template file (the /etc/nginx/endpoints/nginx.conf is dynamically generated when the container is started), where you should be able to change the logging settings to whatever you like.

The template file is located at /etc/nginx/nginx-auto.conf.template: you can grab its contents by issuing the following command:

docker run --rm --entrypoint /bin/cat \
gcr.io/endpoints-release/endpoints-runtime:1 \
/etc/nginx/nginx-auto.conf.template

Using this flag in conjunction with whatever wrapper script seems like a more solid solution, except the help message for this option says:

-n NGINX_CONFIG, --nginx_config NGINX_CONFIG
Use a custom nginx config file instead of the config
template /etc/nginx/nginx-auto.conf.template. If you
specify this option, then all the port options are
ignored.

Now, I haven’t played around enough to figure out exactly what this means, but if it prevents me from specifying the ports to listen on, it would be a showstopper for me.

If you constantly need to switch debug logging on, you might want to investigate a bit more. I personally only needed to manually start the ESP container a few times to realize my mistakes, so I just hacked it by manually reloading nginx.

Cloud Endpoints allows you to specify that resources need to be protected from unauthorized requests.

If your tests are going through ESP, you are going to have to have proper authorization to access the gRPC service behind it, so we need to configure this as well.

The authorization may come as a JWT embedded in the Authorization: Bearer header, or as an API key in the query string, in the form http://your-esp-address/path?key=API-KEY

For testing, I believe using an API key is much easier. Head over to the API Credentials console and create a new API key for your tests (remember to limit your API key’s capabilities to just the Cloud Endpoint service that you want to use it for):

Just like the service account information we talked about, this needs to be shared with your team members. And again, this is sensitive information, so I’d suggest avoiding adding it directly to persistent storage like git. That means this is where KMS is useful again: use KMS to encrypt a file that contains the API key, and add that to your repository.

This time around, the component that needs this piece of information is the Go test. You may want to write a function like this to grab the API key:

func getAPIKey() (string, error) {
    apiKeyFile := `path/to/raw-apikey.txt`
    if _, err := os.Stat(apiKeyFile); err != nil {
        encryptedFile := `path/to/encrypted-apikey.enc`
        cmd := exec.Command("gcloud", "kms", "decrypt",
            "--ciphertext-file="+encryptedFile,
            "--plaintext-file="+apiKeyFile,
            "--location=global",
            "--keyring=your-keyring",
            "--key=your-key",
        )
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            return "", errors.Wrapf(err, `failed to decrypt %s to obtain ESP api key`, encryptedFile)
        }
    }
    buf, err := ioutil.ReadFile(apiKeyFile)
    if err != nil {
        return "", errors.Wrapf(err, `failed to read from %s`, apiKeyFile)
    }
    // Trim the trailing newline so it doesn't end up in the query string
    return strings.TrimSpace(string(buf)), nil
}

Then you can add the API key that you got from above to all of your requests to ESP. Since it would be rather tedious to do u += "?key=" + apiKey for every HTTP call in your tests, I would suggest creating an http.RoundTripper like this:

type espRoundTripper struct {
    key string
}

func newESPRoundTripper(key string) http.RoundTripper {
    return &espRoundTripper{key: key}
}

func (t *espRoundTripper) RoundTrip(r *http.Request) (*http.Response, error) {
    if r.URL == nil {
        return nil, errors.New(`empty URL in HTTP request`)
    }
    if v := r.URL.Query(); v.Get(`key`) == "" {
        v.Set(`key`, t.key)
        r.URL.RawQuery = v.Encode()
    }
    // Delegate to the default transport; a RoundTripper should
    // not call a client's Do method
    return http.DefaultTransport.RoundTrip(r)
}

And set the Transport field of your http.Client to the above espRoundTripper:

cl := &http.Client{
    Transport: newESPRoundTripper(apiKey),
}

This makes sure that the API key is embedded in the request automatically.

That’s it for now. Hopefully this was helpful for you.

Happy hacking!

Go/perl hacker; author of peco; works @ Mercari; ex-mastermind of builderscon; Proud father of three boys;