Deploying to Knative from a Private GitHub Repo - Go

This sample demonstrates:

  • Pulling source code from a private GitHub repository using a deploy key
  • Pushing a Docker container to a private DockerHub repository using a username and password
  • Deploying to Knative Serving using image pull secrets

Before you begin

You need a Kubernetes cluster with Knative Serving and Knative Build installed, and kubectl configured to talk to it.
1. Setting up the default service account

Knative Serving runs pods as the default service account in the namespace where you created your resources. You can inspect it by entering the following command:

$ kubectl get serviceaccount default --output yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: default
secrets:
- name: default-token-zd84v

We are going to add an image pull Secret to this service account.

  1. Create your image pull Secret with the following command, replacing values as necessary:
   kubectl create secret docker-registry dockerhub-pull-secret \
   --docker-server=<your-registry-server> \
   --docker-username=<your-name> --docker-password=<your-pword>

To learn more about Kubernetes pull Secrets, see Creating a Secret in the cluster that holds your authorization token.
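Under the hood, a docker-registry Secret stores a .dockerconfigjson entry whose auth field is the base64 encoding of username:password. A minimal local sketch of that JSON, using illustrative credentials and DockerHub's conventional v1 endpoint:

```shell
# Illustrative credentials only; replace with your own values.
USER=your-name
PWORD=your-pword
# The auth field is base64("username:password"), unwrapped (-w 0).
AUTH=$(printf '%s:%s' "$USER" "$PWORD" | base64 -w 0)
printf '{"auths":{"https://index.docker.io/v1/":{"username":"%s","password":"%s","auth":"%s"}}}\n' \
  "$USER" "$PWORD" "$AUTH"
```

This is only a sketch of the Secret's payload; in practice let kubectl create secret docker-registry build it for you, as shown above.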

  2. Add the newly created imagePullSecret to your default service account by entering:
   kubectl edit serviceaccount default

This will open the resource in your default text editor. Under secrets:, add:

     - name: default-token-zd84v
     # This is the secret we just created:
     - name: dockerhub-pull-secret
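After saving, the default service account should look roughly like this (the token name will differ in your cluster):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: default
secrets:
- name: default-token-zd84v
# This is the secret we just created:
- name: dockerhub-pull-secret
```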

2. Configuring the build

The objects in this section are all defined in build-bot.yaml, and the fields that need to be changed say REPLACE_ME. Open the build-bot.yaml file and make the necessary replacements.

The following sections explain the different configurations in the build-bot.yaml file, as well as the necessary changes for each section.

Setting up our Build service account

To separate our Build's credentials from our application's credentials, the Build runs as its own service account:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-bot
secrets:
- name: deploy-key
- name: dockerhub-push-secrets

Creating a deploy key

You can set up a deploy key for a private GitHub repository by following these instructions. The deploy key in the build-bot.yaml file in this folder is real; you do not need to change it for the sample to work.
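If you want to generate a fresh key pair for your own repository, one way (assuming OpenSSH's ssh-keygen is available; the file names are illustrative) is:

```shell
# Generate an RSA key pair with no passphrase.
ssh-keygen -t rsa -b 4096 -f deploy_key -N '' -q
# deploy_key.pub is what you add in the GitHub repository's deploy-key
# settings; deploy_key is what gets base64-encoded into the Secret.
ls deploy_key deploy_key.pub
```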

apiVersion: v1
kind: Secret
metadata:
  name: deploy-key
  annotations:
    # This tells us that this credential is for use with
    # github.com repositories.
    build.knative.dev/git-0: github.com
type: kubernetes.io/ssh-auth
data:
  # Generated by:
  # cat id_rsa | base64 -w 0
  ssh-privatekey: <long string>

  # Generated by:
  # ssh-keyscan github.com | base64 -w 0
  known_hosts: <long string>
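The -w 0 flag in the comments above matters: Secret data values must be a single base64 line with no wrapping (GNU coreutils base64 assumed). A quick local check with a dummy file:

```shell
# Stand-in for the real id_rsa; never commit actual key material.
printf 'dummy-key-material' > id_rsa_demo
# -w 0 disables line wrapping, producing one continuous line.
base64 -w 0 id_rsa_demo   # prints ZHVtbXkta2V5LW1hdGVyaWFs
echo
```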

Creating a DockerHub push credential

Create a new Secret for your DockerHub credentials. Replace the necessary values:

apiVersion: v1
kind: Secret
metadata:
  name: dockerhub-push-secrets
  annotations:
    build.knative.dev/docker-0: https://index.docker.io/v1/
type: kubernetes.io/basic-auth
stringData:
  username: <dockerhub-user>
  password: <dockerhub-password>

Creating the build bot

When finished with the replacements, create the build bot by entering the following command:

kubectl create --filename build-bot.yaml

3. Installing a Build template and updating manifest.yaml

  1. Install the Kaniko build template by entering the following command:
   kubectl apply --filename <url-of-the-kaniko-build-template>
  2. Open manifest.yaml and substitute your private DockerHub repository name for REPLACE_ME.
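For orientation, a Knative Service that wires a build to a revision has roughly this shape in the v1alpha1 API. Treat this as a sketch, not the exact contents of manifest.yaml; the git URL and image values here are placeholders:

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: private-repos
spec:
  runLatest:
    configuration:
      build:
        serviceAccountName: build-bot     # the service account created above
        source:
          git:
            url: git@github.com:<your-name>/<your-repo>.git
            revision: master
        template:
          name: kaniko                    # the build template installed above
          arguments:
          - name: IMAGE
            value: docker.io/REPLACE_ME/private-repos
      revisionTemplate:
        spec:
          container:
            image: docker.io/REPLACE_ME/private-repos
```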

Deploying your application

At this point, you’re ready to deploy your application:

kubectl create --filename manifest.yaml

To make sure everything works, capture the host URL and the IP of the ingress endpoint in environment variables:

# Put the Host URL into an environment variable.
export SERVICE_HOST=$(kubectl get route private-repos \
  --output jsonpath="{.status.domain}")

# In Knative 0.2.x and prior versions, the `knative-ingressgateway`
# service was used instead of `istio-ingressgateway`.
INGRESSGATEWAY=knative-ingressgateway
INGRESSGATEWAY_LABEL=knative

# The use of `knative-ingressgateway` is deprecated in Knative v0.3.x.
# Use `istio-ingressgateway` instead, since `knative-ingressgateway`
# will be removed in Knative v0.4.
if kubectl get configmap config-istio -n knative-serving &> /dev/null; then
    INGRESSGATEWAY=istio-ingressgateway
    INGRESSGATEWAY_LABEL=istio
fi

# Put the IP address into an environment variable.
export SERVICE_IP=$(kubectl get svc $INGRESSGATEWAY --namespace istio-system \
  --output jsonpath="{.status.loadBalancer.ingress[*].ip}")

Note: If your cluster is running outside a cloud provider (for example, on Minikube), your services will never get an external IP address. In that case, use the Istio hostIP and nodePort as the service IP:

export SERVICE_IP=$(kubectl get po --selector $INGRESSGATEWAY_LABEL=ingressgateway --namespace istio-system \
  --output 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc $INGRESSGATEWAY \
  --namespace istio-system --output 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')

Now curl the service IP to make sure the deployment succeeded:

curl -H "Host: $SERVICE_HOST" http://$SERVICE_IP

Appendix: Sample Code

The sample code is in a private GitHub repository consisting of two files.

  1. Dockerfile
   # Use golang:alpine to optimize the image size.
   # See the golang image documentation for more information
   # about the difference between golang and golang:alpine.
   FROM golang:alpine

   ADD . /go/src/

   RUN CGO_ENABLED=0 go build

   ENTRYPOINT ["knative-build"]
  2. main.go
   package main

   import (
   	"fmt"
   	"net/http"
   )

   const (
   	port = ":8080"
   )

   func helloWorld(w http.ResponseWriter, r *http.Request) {
   	fmt.Fprint(w, "Hello World!")
   }

   func main() {
   	http.HandleFunc("/", helloWorld)
   	http.ListenAndServe(port, nil)
   }