Being a cluster administrator comes with its own challenges, especially in environments that run out-of-tree (OOT) kernel modules. Upgrading device plug-ins or moving across kernel versions is error-prone when done node by node. This is where the Kernel Module Management Operator (KMM) comes in, allowing admins to build, sign, and deploy a kernel module for multiple kernel versions.
KMM is designed to accommodate multiple kernel versions at once for any kernel module. Admins can also use the operator to leverage the hardware acceleration capabilities of the Intel Data Center GPU Flex Series, enabling seamless node upgrades, faster application processing, and quicker module deployment.
Setting up KMM
KMM requires an already working OpenShift environment and a registry to push images to. KMM can be installed using OperatorHub in the OpenShift console or via the following kmm.yaml:
---
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-kmm
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: kernel-module-management
  namespace: openshift-kmm
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: kernel-module-management
  namespace: openshift-kmm
spec:
  channel: "stable"
  installPlanApproval: Automatic
  name: kernel-module-management
  source: redhat-operators
  sourceNamespace: openshift-marketplace
Then apply it with:
oc apply -f kmm.yaml
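Once the operator is running, kernel modules are described with a Module custom resource that maps kernel versions to prebuilt module images in your registry. The sketch below is illustrative: the module name, image path, and registry are placeholders, and the exact fields should be checked against the KMM documentation for your operator version.

```yaml
# Hypothetical Module resource: loads "my_module" on worker nodes,
# picking the image that matches each node's kernel version.
apiVersion: kmm.sigs.x-k8s.io/v1beta1
kind: Module
metadata:
  name: my-module
  namespace: openshift-kmm
spec:
  moduleLoader:
    container:
      modprobe:
        moduleName: my_module        # module to insert with modprobe
      kernelMappings:
        # ${KERNEL_FULL_VERSION} is substituted per node by KMM
        - regexp: '^.+\.x86_64$'
          containerImage: "registry.example.com/my-module:${KERNEL_FULL_VERSION}"
  selector:
    node-role.kubernetes.io/worker: ""
```

The `kernelMappings` list is what lets one Module resource cover many kernel versions at once: each node gets the image tagged for its own kernel.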
Enabling hardware acceleration
Once installed, KMM can compile and install kernel module drivers for your hardware. Admins can then integrate with the Node Feature Discovery (NFD) Operator, which detects hardware features on nodes and labels them for later use in selectors. NFD automatically labels nodes that present certain characteristics, including whether a node has a GPU and which GPU it is.
Using NFD labels, module deployment and enablement can be targeted at specific kernel versions, so that drivers are activated only on hosts with both the required kernel and the required hardware. Ensuring that only compatible drivers are installed on nodes with a supported kernel is what makes KMM so valuable.
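In practice, this targeting is a node selector in the Module spec that matches an NFD-published label. The fragment below is a hedged sketch: NFD publishes PCI labels in the form `feature.node.kubernetes.io/pci-<class>_<vendor>.present`, and `0300_8086` (display controller, Intel vendor ID) is used here as an illustrative match for an Intel GPU; verify the exact label on your nodes with `oc get node <name> --show-labels`.

```yaml
# Fragment of a Module spec: restrict the module to nodes where NFD
# has detected an Intel display-class PCI device (label is illustrative).
spec:
  selector:
    feature.node.kubernetes.io/pci-0300_8086.present: "true"
```

Combined with `kernelMappings`, this means a driver is loaded only where both the hardware label and a matching kernel image exist.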
With NFD integration, KMM can deploy Intel GPU kernel modules to exactly the intended nodes while leaving all other nodes unaffected. This process is detailed further on the Developers.redhat.com site.
Final thoughts
This is just one aspect of KMM and kernel modules that can be used to reduce the effort of managing updates across multiple nodes. KMM lets you handle out-of-tree kernel modules seamlessly until you can incorporate your drivers upstream and include them in your distribution.
KMM is a community project, which you can test on upstream Kubernetes. There is also a Slack community channel where you can chat with fellow developers and experts about more ways to apply KMM to your own environment.