Preface
We know that the backend pods are what actually serve requests, but for load balancing, for domain names, for..., the Service was born, and later the Ingress was born. So why do we need Ingress? First, see what the official docs say:
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name based virtual hosting.
So an Ingress mainly serves four purposes:
1) Give a Service inside the cluster an externally reachable URL, i.e. let clients outside the cluster reach it. (wxy: a NodePort-type Service can do this too, as discussed later)
2) Do proper load balancing; after all, a Service's own load balancing is fairly basic
3) Terminate SSL/TLS. That is, for applications that do not serve HTTPS themselves, a dedicated component can handle the security side for us, so the application only needs to focus on its business logic; you could say it "filters out" SSL/TLS
4) Name-based virtual hosting. My understanding is that this is why we say Ingress serves at the application layer: an Ingress is not tied to a single application/Service, but can distinguish different "hosts" by name.... (wxy: this should become clearer as we read on)
So are these four functions implemented by the Ingress itself? Actually no (so strictly speaking it is inaccurate to call them the Ingress's functions); an Ingress Controller is what implements them. The Ingress merely acts as the "spokesperson" of the cluster's Services towards the Ingress Controller: it effectively registers with the IC, telling it by what rules to forward traffic, i.e. how to expose my application Service externally in the four respects above.
Alright, let's see how to actually put Ingress to use.
Part 1: Create the application pod/Service
1. The application pod's basic information:
# kubectl get pods -ncattle-system-my -oyaml rancher-57f75c44f4-2mrz6
...
containers:
- args:
- --http-listen-port=80
- --https-listen-port=443
- --add-local=auto
...
name: rancher
ports:
- containerPort: 80 --- means the application container exposes port 80
protocol: TCP
...
The actual ports inside the container:
sh-4.4# cat /usr/bin/entrypoint.sh
#!/bin/bash
set -e
exec tini -- rancher --http-listen-port=80 --https-listen-port=443 --audit-log-path=${AUDIT_LOG_PATH} --audit-level=${AUDIT_LEVEL} --audit-log-maxage=${AUDIT_LOG_MAXAGE} --audit-log-maxbackup=${AUDIT_LOG_MAXBACKUP} --audit-log-maxsize=${AUDIT_LOG_MAXSIZE} "${@}"
2. The application's Service at this point:
# kubectl get svc -ncattle-system-my -oyaml
...
spec:
  clusterIP: 10.105.53.47
  ports:
  - name: http
    port: 80          --- only needs to serve HTTP on port 80, because the ingress will terminate HTTPS for us
    protocol: TCP
    targetPort: 80
  selector:
    app: rancher
  type: ClusterIP     --- note: to use Ingress, ClusterIP is usually enough for the Service type; details in the summary later
status:
  loadBalancer: {}    --- the status at this point: no load balancing has been done
Part 2: Create the Ingress and configure rules for the application Service
# kubectl get ingress -ncattle-system-my -oyaml
...
spec:
  rules:
  - host: rancher.my.test.org     --- rule 1: the host, i.e. domain name, this rule applies to
    http:                          --- this rule is for the Service named rancher created above; traffic goes to that Service's port 80
      paths:
      - path: /example             --- may be omitted
        backend:
          serviceName: rancher
          servicePort: 80
  - host: bar.foo.com              --- included to illustrate what "name based virtual hosting" means
    http:
      paths:
      - backend:
          serviceName: service2
          servicePort: 80
  tls:                             --- for https; the certificate lives in the secret named tls-rancher-ingress
  - hosts:
    - rancher.my.test.org
    secretName: tls-rancher-ingress
0. First, see how the official docs describe the meaning of each Ingress field
The Ingress spec has all the information needed to configure a load balancer or proxy server.
Ingress resource only supports rules for directing HTTP traffic.
That is: this information is all for the actual load balancer or proxy to consume, and currently it only covers HTTP forwarding; HTTPS is HTTP + TLS, so it is HTTP-based as well.
The rules have two parts: http rules and tls rules (the latter only if HTTPS is used, otherwise they are unnecessary)
1. Each http rule carries three pieces of information:
host (optional): the host this rule applies to. If not configured, the rule applies to all inbound HTTP traffic through the IP address specified.
A list of paths: each path is paired with a serviceName and servicePort. When the LB receives incoming traffic, the traffic is only forwarded to the backend Service if its content matches both the host and the path. Note that the "path" field may be omitted, which means "root".
backend: the actual backend Service, i.e. the Service that serviceName and servicePort point to.
A note on "Name-based virtual hosts support routing HTTP traffic to multiple host names at the same IP address":
that is, one proxy machine can forward traffic for multiple services, as long as their hosts differ; here, host can be understood as the domain name.
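As a sketch of name-based virtual hosting (the two hostnames and Service names below are made-up placeholders for illustration), two hosts sharing one ingress, and therefore one IP, might look like:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: name-virtual-host-ingress   # hypothetical name
spec:
  rules:
  - host: foo.bar.com               # requests carrying Host: foo.bar.com ...
    http:
      paths:
      - backend:
          serviceName: service1     # ... are routed to service1
          servicePort: 80
  - host: bar.foo.com               # requests carrying Host: bar.foo.com ...
    http:
      paths:
      - backend:
          serviceName: service2     # ... are routed to service2
          servicePort: 80
```

Both rules are served at the same address; the proxy picks the backend purely from the HTTP Host header.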
2. tls rule
Key points about ingress TLS:
1) Only supports a single TLS port, 443, and assumes TLS termination.
2) As you can see, a tls rule can also specify hosts. If these differ from the hosts in the http rules, they are multiplexed on the same port according to the hostname specified through the SNI TLS extension (assuming the Ingress controller supports SNI). --- what exactly SNI is, I'll study later
3) The secret referenced in tls must contain keys named tls.crt and tls.key that contain the certificate and private key to use for TLS. Also, the certificate's CN must match the hosts configured under - hosts in the ingress. This certificate is for the controller's use, i.e. it tells the controller to use the certificate I require when doing the TLS handshake with clients.
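As a sketch, the secret referenced by secretName must be of type kubernetes.io/tls and carry exactly those two keys (the payloads below are placeholders, not real base64 data):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: tls-rancher-ingress       # the name the ingress's secretName points at
  namespace: cattle-system-my     # must live in the same namespace as the ingress
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate; its CN must match the ingress host>
  tls.key: <base64-encoded private key>
```

`kubectl create secret tls <name> --cert tls.crt --key tls.key` produces exactly this shape, as shown in the experiment logs later.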
4) On load balancing:
An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. More advanced load balancing concepts (e.g. persistent sessions, dynamic weights) are not yet exposed through the Ingress. You can instead get these features through the load balancer used for a Service。
Analysis: an IC may load some load-balancing policy settings at startup, and that policy applies to all the ingresses behind it.
But configuring the IC's load-balancing policy through the Ingress is not yet supported; instead, you can configure more advanced load-balancing features on the Service to meet your requirements.
wxy: having the ingress controller load-balance for the application via the ingress probably won't work; let the application's own Service take care of it.
Part 3: Create an Ingress Controller to implement the ingress
0. The official docs explain the Ingress Controller like this:
You may deploy any number of ingress controllers within a cluster. When you create an ingress, you should annotate each ingress with the appropriate ingress.class to indicate which ingress controller should be used if more than one exists within your cluster. If you do not define a class, your cloud provider may use a default ingress controller. Ideally, all ingress controllers should fulfill this specification, but the various ingress controllers operate slightly differently. Note: Make sure you review your ingress controller’s documentation to understand the caveats of choosing it. An admin can deploy an Ingress controller such that it only satisfies Ingress from a given namespace, but by default, controllers will watch the entire Kubernetes cluster for unsatisfied Ingress.
Analysis:
For an ingress to actually work, the cluster needs an ingress controller (this kind of controller differs from the controller types managed by kube-controller-manager: the way it manages pods is user-defined; the currently supported ingress controllers are the GCE and nginx controllers, and such a controller is itself a Deployment). A cluster may also run several ingress controllers, in which case your ingress must use the ingress.class annotation to indicate which IC it wants; if you do not define a class, your cloud provider may use a default IC. Also note that newer Kubernetes versions have replaced the annotation with the ingressClassName field.
Although an admin can deploy an IC that only satisfies Ingresses from a given namespace, by default a controller watches the entire Kubernetes cluster for unsatisfied Ingresses.
wxy: ingress.class is covered in detail in a later section; since there is only one ingress controller here, ignore it for now.
In my hands-on experience, ingress and ingress controller work together like this:
First, the ingress is created, with its http rule (the path part) and tls rule (the certificate part) prepared.
Then an ingress controller of some type, e.g. nginx, is created; assume this IC is configured to serve the whole cluster (the default configuration from the nginx site does exactly that).
The ingress controller then actively watches all ingresses in the cluster, reads each ingress's information into its own proxy rules, and updates the ingress's relevant fields.
The detailed experiment record follows:
0. The official nginx controller describes itself like this:
ingress-nginx is an Ingress controller for Kubernetes using NGINX as a reverse proxy and load balancer. Nginx is configured to automatically discover all ingress with the kubernetes.io/ingress.class: "nginx" annotation.
Please note that the ingress resource should be placed inside the same namespace of the backend resource.
Analysis: the IC automatically discovers every ingress annotated with "nginx", and requires the ingress to be placed in the same namespace as the backend (Service) it proxies.
wxy: the IC and the ingress need not share a namespace; the IC is usually cluster-wide.....
1. Create the Ingress Controller and related resources
# kubectl apply -f ./mandatory.yaml
The main content of mandatory.yaml:
[root@node213 wxy]# cat mandatory.yaml
---
# 0. First create a dedicated namespace
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
# 1. Create three ConfigMaps, all for the controller, i.e. nginx, to consume; by default all three are empty
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  ...
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  ...
---
# 2. Create a ServiceAccount for the controller and grant it certain permissions,
#    including watch on Ingress resources and update on their status
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
...
- apiGroups:
  - "extensions"
  - "networking.k8s.io"
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch       --- can watch Ingresses across the whole cluster
- apiGroups:
  - "extensions"
  - "networking.k8s.io"
  resources:
  - ingresses/status
  verbs:
  - update      --- and can update an Ingress's status
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
...
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
roleRef:
  name: nginx-ingress-role
subjects:
- name: nginx-ingress-serviceaccount
  ...
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
roleRef:
  name: nginx-ingress-clusterrole
subjects:
- namespace: ingress-nginx
  ...
---
# 3. Create the ingress controller itself (a Deployment), which cascades into
#    creating the corresponding pod instance(s)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
...
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
        args:
        - /nginx-ingress-controller
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        - --publish-service=$(POD_NAMESPACE)/ingress-nginx
        - --annotations-prefix=nginx.ingress.kubernetes.io
        ...
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
---
2. Create the ingress controller's Service
# kubectl apply -f ./service-nodeport.yaml
The IC is just a pod after all, so as the global proxy it must be able to expose itself; and precisely because it is the global proxy, the IC's Service is generally of type NodePort or something with even wider reach.
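A minimal sketch of what service-nodeport.yaml declares (abridged; the actual file shipped in the nginx repo carries more labels and metadata):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort                # expose the controller pod on a port of every node
  selector:                     # selects the nginx-ingress-controller pods from the Deployment above
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80              # the controller container's http port
  - name: https
    port: 443
    targetPort: 443             # the controller container's https port
```

With no nodePort values pinned, Kubernetes picks random ones, which is why the ports 56901/25064 show up in the output below.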
Note: since this is an internal network, the required files and images must be downloaded in advance; being behind the firewall, I downloaded the files from GitHub:
1) Deployment files
https://github.com/nginxinc/kubernetes-ingress
2) Image used
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
3. The Ingress gets implemented
The ingress controller automatically finds the ingresses that need to be satisfied, reads their content into its own rules, and updates each ingress's information.
1) The ingress controller's Service:
# kubectl get svc -ningress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx NodePort 10.106.118.8 <none> 80:56901/TCP,443:25064/TCP 26h
2) The ingress controller's logs as it watches the ingress:
status.go:287] updating Ingress wxy-test/rancher status from [] to [{10.106.118.8 }]
event.go:255] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"my-cattle-system", Name:"rancher", UID:"b042e1c5-b851-11ea-9fd1-286ed488c73f", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"937884", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress my-cattle-system/rancher
event.go:255] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"my-cattle-system", Name:"rancher", UID:"b042e1c5-b851-11ea-9fd1-286ed488c73f", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"937885", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress my-cattle-system/rancher
3) The Ingress's information gets refreshed:
# kubectl get ingress -nmy-cattle-system -oyaml
annotations:
  --- a new annotation, added once the ingress is implemented
  field.cattle.io/publicEndpoints: '[{"addresses":["10.100.126.179"],"port":443,"protocol":"HTTPS","serviceName":"wxy-test:rancher","ingressName":"wxy-test:rancher","hostname":"rancher.test.org","allNodes":false}]'
  nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"   --- annotations present since the ingress was created; is it because this prefix matches the nginx ingress controller's that nginx "claimed" it?
  nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
  nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
...
status:
  loadBalancer:
    ingress:                --- newly added: the address of the ingress controller's Service
    - ip: 10.106.118.8
4. A closer look at nginx's forwarding rules
Enter the nginx instance to see how the ingress's forwarding rules and certificate act on nginx:
# kubectl exec -ti -ningress-nginx nginx-ingress-controller-744b8ccf4c-mdnws /bin/sh
$ cat ./nginx.conf
...
## start server rancher.my.test.org
server {
    server_name rancher.my.test.org ;
    listen 80 ;
    listen [::]:80 ;
    listen 443 ssl http2 ;
    listen [::]:443 ssl http2 ;
    set $proxy_upstream_name "-";
    ssl_certificate_by_lua_block {
        certificate.call()
    }
    location / {
        set $namespace "my-cattle-system ";
        set $ingress_name "rancher";
        set $service_name "rancher";
        set $service_port "80";
        set $location_path "/";
...
Part 4: Accessing the application through the Ingress
Verification method 1: call the API with curl
# IC_HTTPS_PORT=25064     --- the nodePort exposed by nginx's Service
# IC_IP=192.168.48.213    --- the IP of any one of the cluster nodes
# curl --resolve rancher.test.org:$IC_HTTPS_PORT:$IC_IP https://rancher.test.org:$IC_HTTPS_PORT --insecure
Result:
{"type":"collection","links":{"self":"https://rancher.test.org:25064/"},"actions":{},"pagination":{"limit":1000,"total":4},"sort":{"order":"asc","reverse":"https://rancher.test.org:25064/?order=desc"},"resourceType":"apiRoot","data":[{"apiVersion":{"group":"meta.cattle.io","path":"/meta","version":"v1"},"baseType":"apiRoot","links":{"apiRoots":"https://rancher.test.org:25064/meta/apiroots","root":"h
...success at last
Alternatively, on the client machine:
# vi /etc/hosts
192.168.48.213 rancher.test.org   --- add this line
Then you can simply access:
# curl https://rancher.test.org:25064 -k
Verification method 2: via a browser
1) First, since this is a custom domain, public DNS won't resolve it, so the corresponding domain mapping must be added directly on the client machine; in C:\Windows\System32\drivers\etc\hosts add:
192.168.48.214 rancher.test.org # source server
2) Then open in the browser:
https://rancher.test.org:25064
Note:
You must access via the domain name, otherwise:
# curl https://192.168.48.213:42060 -k
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>openresty/1.15.8.2</center>
</body>
</html>
That is, accessing by IP hits nginx directly, and nginx then has no idea which backend you actually want to reach.
Part 5: Ingress Class in detail
0. The official docs say:
1) Before Kubernetes 1.18, use the ingress.class annotation (from the nginx site):
If you're running multiple ingress controllers, or running on a cloud provider that natively handles ingress such as GKE,
you need to specify the annotation kubernetes.io/ingress.class: "nginx" in all ingresses that you would like the ingress-nginx controller to claim.
Analysis: this annotation is for nginx to read: an nginx-type ingress controller watches for ingresses that "want me" and adds them to its "jurisdiction".
2) From Kubernetes 1.18 onwards, use the IngressClass object and the ingressClassName field
Ingresses may be implemented by different controllers, and different controllers come with different configurations. How is that achieved? Through the IngressClass resource. Concretely:
An ingress can specify a class, i.e. the IngressClass resource it corresponds to.
That IngressClass carries two parameters:
1) controller: identifies which ingress controller this class maps to; in the official wording, the name of the controller that should implement the class.
2) parameters (optional): a TypedLocalObjectReference, which is a link to a custom resource containing additional configuration for the controller.
In other words it is for the controller's use: if the controller needs extra configuration, the following three elements locate the object that carries it.
Example:
apiVersion: networking.k8s.io/v1beta1
kind: IngressClass
metadata:
  name: external-lb                             --- the class's name
spec:
  controller: example.com/ingress-controller    --- asks the cluster to use this ingress controller
  parameters:                                   --- when using this controller, also reference a CRD called IngressParameters; the instance is named external-lb
    apiGroup: k8s.example.com/v1alpha
    kind: IngressParameters
    name: external-lb
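An Ingress then opts into this class through the ingressClassName field (the ingress name and backend below are hypothetical, for illustration only):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app                    # hypothetical ingress
spec:
  ingressClassName: external-lb   # refers to the IngressClass defined above
  rules:
  - host: app.example.com
    http:
      paths:
      - backend:
          serviceName: my-app     # hypothetical backend Service
          servicePort: 80
```

This field plays the role the kubernetes.io/ingress.class annotation played before 1.18: it tells the matching controller "this ingress is yours".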
1. Manually add the ingress.class annotation to an Ingress and watch the nginx ingress controller's reaction
Without any explicitly created class, setting the annotation
kubernetes.io/ingress.class: "nginx-1" on the ingress removes it from the default IC,
and changing it back to kubernetes.io/ingress.class: "nginx" adds it back again. The detailed logs:
# kubectl logs -f -ningress-nginx nginx-ingress-controller-744b8ccf4c-8wnkn
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:    0.26.1
  Build:      git-2de5a893a
  Repository: https://github.com/kubernetes/ingress-nginx
  nginx version: openresty/1.15.8.2
-------------------------------------------------------------------------------
0. Preparation: load some configuration, initialize a URL for reaching the controller (https port defaults
   to 443), and also create a fake certificate (what is that for?)
flags.go:243] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
client_config.go:541] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
main.go:182] Creating API client for https://10.96.0.1:443
main.go:226] Running in Kubernetes cluster version v1.12 (v1.12.1) - git (clean) commit 4ed3216f3ec431b140b1d899130a69fc671678f4 - platform linux/amd64
main.go:101] SSL fake certificate created /etc/ingress-controller/ssl/default-fake-certificate.pem
main.go:105] Using deprecated "k8s.io/api/extensions/v1beta1" package because Kubernetes version is < v1.14.0

1. Start the nginx controller. A controller, as the name implies, oversees everything: it keeps watching all
   kinds of resources, generating events, and handling them. Here it watches and finds an ingress named
   rancher in the cattle-system-my namespace.
nginx.go:263] Starting NGINX Ingress controller
event.go:255] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"84608cf7-8924-11ea-a935-286ed488c73f", APIVersion:"v1", ResourceVersion:"4772030", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
event.go:255] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"84530b64-8924-11ea-a935-286ed488c73f", APIVersion:"v1", ResourceVersion:"4772026", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
event.go:255] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"nginx-configuration", UID:"844a2d10-8924-11ea-a935-286ed488c73f", APIVersion:"v1", ResourceVersion:"4851485", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/nginx-configuration
event.go:255] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"cattle-system-my", Name:"rancher", UID:"4acafd23-89f5-11ea-a935-286ed488c73f", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"5065141", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress cattle-system-my/rancher

2. While processing the rancher ingress, its "tls" field says the certificate lives in the secret
   "cattle-system-my/tls-rancher-ingress", but that secret is not found, so an error is logged.
   (A leader election among the pod replicas also happens here, but that seems unrelated to certificates.)
backend_ssl.go:46] Error obtaining X.509 certificate: key 'tls.crt' missing from Secret "cattle-system-my/tls-rancher-ingress"
nginx.go:307] Starting NGINX process
leaderelection.go:241] attempting to acquire leader lease ingress-nginx/ingress-controller-leader-nginx...
controller.go:1125] Error getting SSL certificate "cattle-system-my/tls-rancher-ingress": local SSL certificate cattle-system-my/tls-rancher-ingress was not found. Using default certificate
controller.go:134] Configuration changes detected, backend reload required.
status.go:86] new leader elected: nginx-ingress-controller-744b8ccf4c-mdnws
controller.go:150] Backend successfully reloaded.
controller.go:159] Initial sync, sleeping for 1 second.
controller.go:1125] Error getting SSL certificate "cattle-system-my/tls-rancher-ingress": local SSL certificate cattle-system-my/tls-rancher-ingress was not found. Using default certificate
...

3. Action 1: manually created the secret in that namespace from some certificate material found lying around
-----
kubectl create secret tls wxy-test --key tls.key --cert tls.crt
-----
But after nginx parses it, the CN (domain) in the certificate does not match the domain in the ingress's
"rule", so it still reports an error and falls back to nginx's default certificate.
store.go:446] secret cattle-system-my/tls-rancher-ingress was updated and it is used in ingress annotations. Parsing...
backend_ssl.go:66] Adding Secret "cattle-system-my/tls-rancher-ingress" to the local store
controller.go:1131] Unexpected error validating SSL certificate "cattle-system-my/tls-rancher-ingress" for server "rancher.my.test.org": x509: certificate is valid for rancher.test.org, not rancher.my.test.org
controller.go:1132] Validating certificate against DNS names. This will be deprecated in a future version.
controller.go:1137] SSL certificate "cattle-system-my/tls-rancher-ingress" does not contain a Common Name or Subject Alternative Name for server "rancher.my.test.org": x509: certificate is valid for rancher.test.org, not rancher.my.test.org
controller.go:1139] Using default certificate

4. Action 2: regenerate the certificate material and update the secret
-----
openssl req -x509 -nodes -days 2920 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=rancher.my.test.org/O=nginxsvc"
kubectl create secret tls wxy-test --key tls.key --cert tls.crt
-----
Now the certificate loads correctly.
store.go:446] secret cattle-system-my/tls-rancher-ingress was updated and it is used in ingress annotations. Parsing...
backend_ssl.go:58] Updating Secret "cattle-system-my/tls-rancher-ingress" in the local store

5. Action 3: add a new ingress. The controller watches and notices this Ingress in the default namespace,
   reads it in, and starts parsing:
-----
# kubectl apply -f ./test_ingres --validate=false
-----
1) First it looks up the Service referenced in the configured "rule" field, but no service named test exists
   in the default namespace, so it logs an error.
2) It seems to fall back to a Service in the controller's own namespace as a default, because 10.106.118.8
   is the IC's Service address.
event.go:255] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"test-ingress", UID:"1fc6a4ce-8ea8-11ea-a935-286ed488c73f", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"6913281", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress default/test-ingress
controller.go:811] Error obtaining Endpoints for Service "default/test": no object matching key "default/test" in local store
controller.go:134] Configuration changes detected, backend reload required.
controller.go:150] Backend successfully reloaded
status.go:287] updating Ingress default/test-ingress status from [] to [{10.106.118.8 }]
event.go:255] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"test-ingress", UID:"1fc6a4ce-8ea8-11ea-a935-286ed488c73f", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"6913380", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress default/test-ingress
controller.go:811] Error obtaining Endpoints for Service "default/test": no object matching key "default/test" in local store

6. Action 4: add a class annotation to an ingress, declaring that it should be implemented by the IC of
   class nginx-1. The IC apparently belongs to class nginx by default (no corresponding IngressClass
   resource is found in the default deployment), so nginx removes this ingress's rules from its own
   configuration. Setting it back to nginx adds them back again.
-----
# kubectl edit ingress rancher -ncattle-system-my
kubernetes.io/ingress.class: "nginx-1"
kubernetes.io/ingress.class: "nginx"
-----
store.go:381] removing ingress rancher based on annotation kubernetes.io/ingress.class
event.go:255] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"cattle-system-my", Name:"rancher", UID:"4acafd23-89f5-11ea-a935-286ed488c73f", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"5065141", FieldPath:""}): type: 'Normal' reason: 'DELETE' Ingress cattle-system-my/rancher
controller.go:134] Configuration changes detected, backend reload required.
controller.go:150] Backend successfully reloaded.
store.go:378] creating ingress rancher based on annotation kubernetes.io/ingress.class
event.go:255] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"cattle-system-my", Name:"rancher", UID:"4acafd23-89f5-11ea-a935-286ed488c73f", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"6963589", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress cattle-system-my/rancher
controller.go:134] Configuration changes detected, backend reload required.
controller.go:150] Backend successfully reloaded.
2. About the nginx.ingress.kubernetes.io/ingress.class annotation
If it is set to
nginx.ingress.kubernetes.io/ingress.class: nginx-1
then the ingress is not removed; the IC's log reads:
I0628 02:06:58.235693 9 event.go:255] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"wxy-test", Name:"rancher", UID:"b042e1c5-b851-11ea-9fd1-286ed488c73f", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"1058054", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress wxy-test/rancher
This shows that the nginx controller recognizes any annotation with the nginx prefix as belonging to its own jurisdiction, so whatever value this annotation is set to, the nginx controller still watches the ingress.
wxy's musings:
This is much like the class in pv/pvc; it seems every kind of class plays the same role: "matchmaker".
An ingress needs a controller to implement the rules it lays down, so it lets the class announce: I, the ingress, need such-and-such a controller; if such a controller exists, come to me at once!
Appendix:
typedLocalObjectReference:
contains enough information to let you locate the typed referenced object inside the same namespace.
Analysis: it provides enough information to locate the referenced object of the given type within the same namespace.
Part 6: Configure Ingress annotations for more powerful behavior
1. nginx.ingress.kubernetes.io/ssl-redirect
Use case: 308 or 301 errors
First, curl http://stickyingress.example.com:28217 works.
Then, after modifying the ingress to add the tls content and accessing the corresponding https URL,
accessing the http address again fails:
HTTP/1.1 308 Permanent Redirect
Server: openresty/1.15.8.2
Date: Sun, 28 Jun 2020 10:44:30 GMT
Content-Type: text/html
Content-Length: 177
Connection: keep-alive
Location: https://stickyingress.example.com/   --- i.e. the client is told to use the https address
Solution:
nginx.ingress.kubernetes.io/ssl-redirect: "false"
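In context, the annotation sits in the ingress's metadata; a sketch based on the experiment above (ingress name and secret name are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: sticky-ingress            # hypothetical name
  annotations:
    # keep serving plain HTTP on this host even though tls is configured below
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  tls:
  - hosts:
    - stickyingress.example.com
    secretName: example-tls       # hypothetical secret
  rules:
  - host: stickyingress.example.com
    ...
```

Without the annotation, ingress-nginx redirects all HTTP requests for a TLS-enabled host to HTTPS, which is exactly the 308 observed above.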
2. Sessions and Cookies
nginx.ingress.kubernetes.io/affinity
Currently the only supported value is cookie.
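A sketch of cookie-based session affinity with the nginx controller; these annotations go in the ingress metadata (the cookie name "route" and the lifetimes are example values, following the ingress-nginx annotation docs):

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"       # name of the cookie nginx issues to pin a client
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"   # cookie Expires, in seconds
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"   # cookie Max-Age, in seconds
```

With this in place, nginx hands each client a cookie on its first request and keeps routing that client to the same backend pod afterwards.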
Part 7: Summary
1. On the Service type to use with an Ingress:
An application behind an ingress usually uses a ClusterIP-type Service, because as the official docs say, if you don't want to use an ingress:
You can expose a Service in multiple ways that don’t directly involve the Ingress resource:
Use Service.Type=LoadBalancer
Use Service.Type=NodePort
This makes sense: if the Service is already exposed externally, what would you still need an ingress for? Of course, as we know, those two Service types are a "superset" of ClusterIP, so they can still be used with an ingress; there is just no need.
Another misconception to avoid: using a plain Service does not mean you cannot use HTTPS. You still can; if the application wants to serve HTTPS, it will arrange the certificates TLS needs by itself.
wxy: my guess is that using an ingress also avoids exposing the application's own certificate, because application certificates are often signed by a private CA and may not be trusted out in the wider world.....
First, an ingress by itself cannot receive requests; it is just a bundle of rules. To be able to receive requests you need to create an ingress-controller, and here we chose nginx.
Then, for the nginx component to actually serve traffic, it needs a Service to expose it, so you must also deploy a Service for the ingress-controller, of type NodePort at least.
Finally, nginx upstream has thought all of this through for us: just download the corresponding manifests and tweak the parameters as needed.
wxy's musings:
Why do we need nginx to do our forwarding; wouldn't a Service do?
Answer: this is the age-old question: a Service forwards at layer 4, so it cannot help when the application layer wants HTTPS. So we let nginx act as the HTTPS server facing the outside world, while it talks plain HTTP to the internal application Service. Combine this with the official definition of Ingress quoted earlier and it fits together, even though the actual work is done by the ingress controller.