OpenShift Cluster Console Access to k8s: Source Code Analysis
Jun 16, 2019 23:40 · 1685 words · 4 minute read
Switch to the v3.11.0 branch first!
We already know that after a successful login, the Cluster Console returns the access token issued by k8s directly to the browser.
Open the developer tools in Chrome, switch to the Network tab, and refresh the page. We can capture a number of HTTP requests that call the k8s API; their routes look like /api/kubernetes/&lt;resources&gt;.
Search for /api/kubernetes in openshift/console to locate it:
// pkg/server/server.go
const (
	indexPageTemplateName     = "index.html"
	tokenizerPageTemplateName = "tokener.html"

	authLoginEndpoint              = "/auth/login"
	AuthLoginCallbackEndpoint      = "/auth/callback"
	AuthLoginSuccessEndpoint       = "/"
	AuthLoginErrorEndpoint         = "/error"
	authLogoutEndpoint             = "/auth/logout"
	k8sProxyEndpoint               = "/api/kubernetes/"
	prometheusProxyEndpoint        = "/api/prometheus"
	prometheusTenancyProxyEndpoint = "/api/prometheus-tenancy"
	alertManagerProxyEndpoint      = "/api/alertmanager"
	meteringProxyEndpoint          = "/api/metering"
	customLogoEndpoint             = "/custom-logo"
)
This looks a lot like the routing table found in many web frameworks. Continuing to search with k8sProxyEndpoint as the keyword, we find line 197 of pkg/server/server.go:
// pkg/server/server.go
k8sProxy := proxy.NewProxy(s.K8sProxyConfig)
handle(k8sProxyEndpoint, http.StripPrefix(
	proxy.SingleJoiningSlash(s.BaseURL.Path, k8sProxyEndpoint),
	authHandlerWithUser(func(user *auth.User, w http.ResponseWriter, r *http.Request) {
		r.Header.Set("Authorization", fmt.Sprintf("Bearer %s", user.Token))
		k8sProxy.ServeHTTP(w, r)
	})),
)
Let's take a rough look first. proxy.NewProxy(s.K8sProxyConfig) instantiates a *Proxy object from the k8s proxy configuration. Now let's look at K8sProxyConfig:
// cmd/bridge/main.go
switch *fK8sMode {
case "in-cluster":
	host, port := os.Getenv("KUBERNETES_SERVICE_HOST"), os.Getenv("KUBERNETES_SERVICE_PORT")
	if len(host) == 0 || len(port) == 0 {
		log.Fatalf("unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined")
	}
	k8sEndpoint = &url.URL{Scheme: "https", Host: host + ":" + port}

	var err error
	k8sCertPEM, err = ioutil.ReadFile(k8sInClusterCA)
	if err != nil {
		log.Fatalf("Error inferring Kubernetes config from environment: %v", err)
	}

	rootCAs := x509.NewCertPool()
	if !rootCAs.AppendCertsFromPEM(k8sCertPEM) {
		log.Fatalf("No CA found for the API server")
	}
	tlsConfig := &tls.Config{RootCAs: rootCAs}

	bearerToken, err := ioutil.ReadFile(k8sInClusterBearerToken)
	if err != nil {
		log.Fatalf("failed to read bearer token: %v", err)
	}

	srv.K8sProxyConfig = &proxy.Config{
		TLSClientConfig: tlsConfig,
		HeaderBlacklist: []string{"Cookie", "X-CSRFToken"},
		Endpoint:        k8sEndpoint,
	}

	k8sAuthServiceAccountBearerToken = string(bearerToken)

	// If running in an OpenShift cluster, set up a proxy to the prometheus-k8s serivce running in the openshift-monitoring namespace.
	if *fServiceCAFile != "" {
		serviceCertPEM, err := ioutil.ReadFile(*fServiceCAFile)
		if err != nil {
			log.Fatalf("failed to read service-ca.crt file: %v", err)
		}
		serviceProxyRootCAs := x509.NewCertPool()
		if !serviceProxyRootCAs.AppendCertsFromPEM(serviceCertPEM) {
			log.Fatalf("no CA found for Kubernetes services")
		}
		serviceProxyTLSConfig := &tls.Config{RootCAs: serviceProxyRootCAs}

		srv.PrometheusProxyConfig = &proxy.Config{
			TLSClientConfig: serviceProxyTLSConfig,
			HeaderBlacklist: []string{"Cookie", "X-CSRFToken"},
			Endpoint:        &url.URL{Scheme: "https", Host: openshiftPrometheusHost, Path: "/api"},
		}

		srv.PrometheusTenancyProxyConfig = &proxy.Config{
			TLSClientConfig: serviceProxyTLSConfig,
			HeaderBlacklist: []string{"Cookie", "X-CSRFToken"},
			Endpoint:        &url.URL{Scheme: "https", Host: openshiftPrometheusTenancyHost, Path: "/api"},
		}

		srv.AlertManagerProxyConfig = &proxy.Config{
			TLSClientConfig: serviceProxyTLSConfig,
			HeaderBlacklist: []string{"Cookie", "X-CSRFToken"},
			Endpoint:        &url.URL{Scheme: "https", Host: openshiftAlertManagerHost, Path: "/api"},
		}

		srv.MeteringProxyConfig = &proxy.Config{
			TLSClientConfig: serviceProxyTLSConfig,
			HeaderBlacklist: []string{"Cookie", "X-CSRFToken"},
			Endpoint:        &url.URL{Scheme: "https", Host: openshiftMeteringHost, Path: "/api"},
		}
	}

case "off-cluster":
	k8sEndpoint = validateFlagIsURL("k8s-mode-off-cluster-endpoint", *fK8sModeOffClusterEndpoint)

	srv.K8sProxyConfig = &proxy.Config{
		TLSClientConfig: &tls.Config{
			InsecureSkipVerify: *fK8sModeOffClusterSkipVerifyTLS,
		},
		HeaderBlacklist: []string{"Cookie", "X-CSRFToken"},
		Endpoint:        k8sEndpoint,
	}

default:
	flagFatalf("k8s-mode", "must be one of: in-cluster, off-cluster")
}
Since fK8sMode defaults to "in-cluster", we get:
// case "in-cluster":
srv.K8sProxyConfig = &proxy.Config{
	TLSClientConfig: tlsConfig,
	HeaderBlacklist: []string{"Cookie", "X-CSRFToken"},
	Endpoint:        k8sEndpoint,
}
The Endpoint field corresponds to the address of the k8s API service.
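To make the proxying concrete, here is a minimal, self-contained sketch of the same idea built on the standard library's httputil.ReverseProxy. This is not the console's own proxy package (which is more elaborate); the endpoint host, listen address, and CA path below are only placeholder assumptions matching the usual in-cluster defaults.

// Illustrative sketch only, not code from openshift/console.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"io/ioutil"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Placeholder values: in-cluster these come from the environment and the
	// service account CA file, much as in cmd/bridge/main.go above.
	endpoint := &url.URL{Scheme: "https", Host: "kubernetes.default.svc:443"}
	caPEM, err := ioutil.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/ca.crt")
	if err != nil {
		panic(err)
	}
	rootCAs := x509.NewCertPool()
	rootCAs.AppendCertsFromPEM(caPEM)

	// A single-host reverse proxy plays the role of proxy.NewProxy here.
	p := httputil.NewSingleHostReverseProxy(endpoint)
	p.Transport = &http.Transport{TLSClientConfig: &tls.Config{RootCAs: rootCAs}}

	// Strip the console prefix before forwarding, as handle() + StripPrefix do above.
	http.Handle("/api/kubernetes/", http.StripPrefix("/api/kubernetes", p))
	_ = http.ListenAndServe(":9000", nil)
}

Requests arriving under /api/kubernetes/ are thus forwarded to the API server largely unchanged, apart from the headers the console adds or strips.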
Back in pkg/server/server.go, authHandlerWithUser is a variable holding a function:
// pkg/server/server.go
authHandlerWithUser := func(hf func(*auth.User, http.ResponseWriter, *http.Request)) http.Handler {
	return authMiddlewareWithUser(s.Auther, hf)
}
Back to line 199 of pkg/server/server.go:
// pkg/server/server.go
authHandlerWithUser(func(user *auth.User, w http.ResponseWriter, r *http.Request) {
	r.Header.Set("Authorization", fmt.Sprintf("Bearer %s", user.Token))
	k8sProxy.ServeHTTP(w, r)
})
The nesting here is a bit deep; rewriting it ourselves makes it much clearer:
authMiddlewareWithUser(s.Auther, func(user *auth.User, w http.ResponseWriter, r *http.Request) {
	r.Header.Set("Authorization", fmt.Sprintf("Bearer %s", user.Token))
	k8sProxy.ServeHTTP(w, r)
})
Then find the authMiddlewareWithUser() function:
// pkg/server/middleware.go
func authMiddlewareWithUser(a *auth.Authenticator, handlerFunc func(user *auth.User, w http.ResponseWriter, r *http.Request)) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		user, err := a.Authenticate(r)
		if err != nil {
			plog.Infof("authentication failed: %v", err)
			w.WriteHeader(http.StatusUnauthorized)
			return
		}

		r.Header.Set("Authorization", fmt.Sprintf("Bearer %s", user.Token))

		if err := a.VerifySourceOrigin(r); err != nil {
			plog.Infof("invalid source origin: %v", err)
			w.WriteHeader(http.StatusForbidden)
			return
		}

		if err := a.VerifyCSRFToken(r); err != nil {
			plog.Infof("invalid CSRFToken: %v", err)
			w.WriteHeader(http.StatusForbidden)
			return
		}

		handlerFunc(user, w, r)
	})
}
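So the whole chain for a /api/kubernetes request is: authenticate the request, set the Bearer header, verify the source origin and CSRF token, then hand off to the proxy. Restated as a minimal, self-contained sketch, the pattern looks like this; the authenticate function and the plain handler below are hypothetical stand-ins for *auth.Authenticator and k8sProxy, and the origin/CSRF checks are omitted for brevity (the cookie name used here is examined further below).

// Illustrative sketch of the middleware pattern, not code from openshift/console.
package main

import (
	"fmt"
	"net/http"
)

type user struct{ Token string } // stand-in for auth.User

func withUser(authenticate func(*http.Request) (*user, error),
	next func(*user, http.ResponseWriter, *http.Request)) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		u, err := authenticate(r)
		if err != nil {
			w.WriteHeader(http.StatusUnauthorized) // 401, as in authMiddlewareWithUser
			return
		}
		// Turn the session token into an OAuth2 Bearer header.
		r.Header.Set("Authorization", fmt.Sprintf("Bearer %s", u.Token))
		next(u, w, r)
	})
}

func main() {
	authenticate := func(r *http.Request) (*user, error) {
		c, err := r.Cookie("openshift-session-token")
		if err != nil || c.Value == "" {
			return nil, fmt.Errorf("unauthenticated")
		}
		return &user{Token: c.Value}, nil
	}
	http.Handle("/api/kubernetes/", withUser(authenticate,
		func(u *user, w http.ResponseWriter, r *http.Request) {
			// Here the real code calls k8sProxy.ServeHTTP(w, r).
			fmt.Fprintf(w, "would proxy with %s\n", r.Header.Get("Authorization"))
		}))
	_ = http.ListenAndServe(":9000", nil)
}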
Let's look at the definition of the Authenticator object:
// pkg/auth/auth.go
type Authenticator struct {
	authFunc func() (*oauth2.Config, loginMethod)

	clientFunc func() *http.Client

	// userFunc returns the User associated with the cookie from a request.
	// This is not part of loginMethod to avoid creating an unnecessary
	// HTTP client for every call.
	userFunc func(*http.Request) (*User, error)

	errorURL      string
	successURL    string
	cookiePath    string
	refererURL    *url.URL
	secureCookies bool
}
Authentication actually goes through the Authenticator's userFunc, so we need to find out what that function is. In the same file there is a NewAuthenticator() function, which is the Authenticator's constructor:
// pkg/auth/auth.go
func NewAuthenticator(ctx context.Context, c *Config) (*Authenticator, error) {
	// ...
	switch c.AuthSource {
	case AuthSourceOpenShift:
		a.userFunc = getOpenShiftUser
	default:
		a.userFunc = func(r *http.Request) (*User, error) {
			if oidcAuthSource == nil {
				return nil, fmt.Errorf("OIDC auth source is not intialized")
			}
			return oidcAuthSource.authenticate(r)
		}
	}
}
There are two cases depending on the AuthSource field of the configuration. Since this configuration object is passed in as a parameter, we first look for where the constructor is called, i.e. where the *Config object comes from, which brings us back to main.go (it had to end up here), line 405:
// cmd/bridge/main.go
func main() {
	// ...
	switch *fUserAuth {
	case "oidc", "openshift":
		// ...
		oidcClientConfig := &auth.Config{
			AuthSource:   authSource,
			IssuerURL:    userAuthOIDCIssuerURL.String(),
			IssuerCA:     *fUserAuthOIDCCAFile,
			ClientID:     *fUserAuthOIDCClientID,
			ClientSecret: oidcClientSecret,
			RedirectURL:  proxy.SingleJoiningSlash(srv.BaseURL.String(), server.AuthLoginCallbackEndpoint),
			Scope:        scopes,

			// Use the k8s CA file for OpenShift OAuth metadata discovery.
			// This might be different than IssuerCA.
			K8sCA: caCertFilePath,

			ErrorURL:   authLoginErrorEndpoint,
			SuccessURL: authLoginSuccessEndpoint,

			CookiePath:    cookiePath,
			RefererPath:   refererPath,
			SecureCookies: secureCookies,
		}
		// ...
		if srv.Auther, err = auth.NewAuthenticator(context.Background(), oidcClientConfig); err != nil {
			log.Fatalf("Error initializing authenticator: %v", err)
		}
	}
}
Here we need to look at the definition of the openshift-console deployment on the OpenShift cluster, via oc get deployments -n openshift-console -o yaml:
template:
  spec:
    containers:
    - command:
      - /opt/bridge/bin/bridge
      - --public-dir=/opt/bridge/static
      - --config=/var/console-config/console-config.yaml
      image: docker.io/openshift/origin-console:v3.11.0
      imagePullPolicy: IfNotPresent
bridge reads a configuration file, /var/console-config/console-config.yaml, at startup. At line 156 of cmd/bridge/config.go there is an addAuth() function: as soon as a configuration file is used, the fUserAuth option defaults to openshift, so the authSource variable is set to AuthSourceOpenShift.
Back in pkg/auth/auth.go, where the Authenticator constructor lives, the switch therefore takes the AuthSourceOpenShift case, and the authenticator's userFunc is assigned the getOpenShiftUser() function:
// pkg/auth/auth_openshift.go
func getOpenShiftUser(r *http.Request) (*User, error) {
	// TODO: This doesn't do any validation of the cookie with the assumption that the
	// API server will reject tokens it doesn't recognize. If we want to keep some backend
	// state we should sign this cookie. If not there's not much we can do.
	cookie, err := r.Cookie(openshiftSessionCookieName) // const openshiftSessionCookieName = "openshift-session-token"
	if err != nil {
		return nil, err
	}

	if cookie.Value == "" {
		return nil, fmt.Errorf("unauthenticated")
	}

	return &User{
		Token: cookie.Value,
	}, nil
}
It simply pulls the value of openshift-session-token out of the HTTP request's cookies. This openshift-session-token is the access token issued by k8s as the service provider, which was already analyzed in OpenShift Cluster Console 登录源码解析.
Back to line 31 of pkg/server/middleware.go:
// pkg/server/middleware.go
r.Header.Set("Authorization", fmt.Sprintf("Bearer %s", user.Token))
At this point the HTTP request header carries the access token in the form required by OAuth2. k8sProxy.ServeHTTP(w, r) is then executed to proxy the current HTTP request; k8s checks whether the access token is valid and returns the resources the user asked for.
Because the Cluster Console never checks whether the value of the openshift-session-token cookie is valid itself, but hands identity verification entirely to k8s, this also explains why users do not need to log in again after the Cluster Console server application is restarted.
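This statelessness can be checked by hand: take the value of the openshift-session-token cookie from the browser and send it straight to the API server as a Bearer token; the console adds nothing else. A rough sketch follows, where the API server URL and the token are placeholders and InsecureSkipVerify is acceptable only for a throwaway experiment.

// Illustrative sketch only: replaying the console's Bearer request by hand.
package main

import (
	"crypto/tls"
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// Placeholders: copy the token from the openshift-session-token cookie and
	// point the URL at your cluster's API server.
	token := "<openshift-session-token value>"
	req, err := http.NewRequest("GET", "https://api.example.com:6443/api/v1/namespaces", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+token)

	client := &http.Client{Transport: &http.Transport{
		// For a quick experiment only; real code should trust the cluster CA instead.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}

If the token is valid, the API server answers exactly as it would through the console proxy; if it is not, a 401 comes back, and that rejection is the only validation the console relies on.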