{"schema_version":"1.7.2","id":"OESA-2026-1010","modified":"2026-01-09T14:05:42Z","published":"2026-01-09T14:05:42Z","upstream":["CVE-2025-38211","CVE-2025-38375","CVE-2025-68349"],"summary":"kernel security update","details":"The Linux kernel is the core of the operating system.\n\nSecurity Fix(es):\n\nIn the Linux kernel, the following vulnerability has been resolved:\n\nRDMA/iwcm: Fix use-after-free of work objects after cm_id destruction\n\nThe commit 59c68ac31e15 (\"iw_cm: free cm_id resources on the last\nderef\") simplified cm_id resource management by freeing cm_id once all\nreferences to the cm_id were removed. The references are removed either\nupon completion of iw_cm event handlers or when the application destroys\nthe cm_id. This commit introduced a use-after-free condition where the\ncm_id_private object could still be in use by event handler works during\nthe destruction of cm_id. The commit aee2424246f9 (\"RDMA/iwcm: Fix a\nuse-after-free related to destroying CM IDs\") addressed this use-after-\nfree by flushing all pending works at the cm_id destruction.\n\nHowever, still another use-after-free possibility remained. 
It happens\nwith the work objects allocated for each cm_id_priv within\nalloc_work_entries() during cm_id creation, and subsequently freed in\ndealloc_work_entries() once all references to the cm_id are removed.\nIf the cm_id's last reference is decremented in the event handler work,\nthe work object for the work itself gets removed, and causes the use-\nafter-free BUG below:\n\n  BUG: KASAN: slab-use-after-free in __pwq_activate_work+0x1ff/0x250\n  Read of size 8 at addr ffff88811f9cf800 by task kworker/u16:1/147091\n\n  CPU: 2 UID: 0 PID: 147091 Comm: kworker/u16:1 Not tainted 6.15.0-rc2+ #27 PREEMPT(voluntary)\n  Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-3.fc41 04/01/2014\n  Workqueue:  0x0 (iw_cm_wq)\n  Call Trace:\n   <TASK>\n   dump_stack_lvl+0x6a/0x90\n   print_report+0x174/0x554\n   ? __virt_addr_valid+0x208/0x430\n   ? __pwq_activate_work+0x1ff/0x250\n   kasan_report+0xae/0x170\n   ? __pwq_activate_work+0x1ff/0x250\n   __pwq_activate_work+0x1ff/0x250\n   pwq_dec_nr_in_flight+0x8c5/0xfb0\n   process_one_work+0xc11/0x1460\n   ? __pfx_process_one_work+0x10/0x10\n   ? assign_work+0x16c/0x240\n   worker_thread+0x5ef/0xfd0\n   ? __pfx_worker_thread+0x10/0x10\n   kthread+0x3b0/0x770\n   ? __pfx_kthread+0x10/0x10\n   ? rcu_is_watching+0x11/0xb0\n   ? _raw_spin_unlock_irq+0x24/0x50\n   ? rcu_is_watching+0x11/0xb0\n   ? __pfx_kthread+0x10/0x10\n   ret_from_fork+0x30/0x70\n   ? 
__pfx_kthread+0x10/0x10\n   ret_from_fork_asm+0x1a/0x30\n   </TASK>\n\n  Allocated by task 147416:\n   kasan_save_stack+0x2c/0x50\n   kasan_save_track+0x10/0x30\n   __kasan_kmalloc+0xa6/0xb0\n   alloc_work_entries+0xa9/0x260 [iw_cm]\n   iw_cm_connect+0x23/0x4a0 [iw_cm]\n   rdma_connect_locked+0xbfd/0x1920 [rdma_cm]\n   nvme_rdma_cm_handler+0x8e5/0x1b60 [nvme_rdma]\n   cma_cm_event_handler+0xae/0x320 [rdma_cm]\n   cma_work_handler+0x106/0x1b0 [rdma_cm]\n   process_one_work+0x84f/0x1460\n   worker_thread+0x5ef/0xfd0\n   kthread+0x3b0/0x770\n   ret_from_fork+0x30/0x70\n   ret_from_fork_asm+0x1a/0x30\n\n  Freed by task 147091:\n   kasan_save_stack+0x2c/0x50\n   kasan_save_track+0x10/0x30\n   kasan_save_free_info+0x37/0x60\n   __kasan_slab_free+0x4b/0x70\n   kfree+0x13a/0x4b0\n   dealloc_work_entries+0x125/0x1f0 [iw_cm]\n   iwcm_deref_id+0x6f/0xa0 [iw_cm]\n   cm_work_handler+0x136/0x1ba0 [iw_cm]\n   process_one_work+0x84f/0x1460\n   worker_thread+0x5ef/0xfd0\n   kthread+0x3b0/0x770\n   ret_from_fork+0x30/0x70\n   ret_from_fork_asm+0x1a/0x30\n\n  Last potentially related work creation:\n   kasan_save_stack+0x2c/0x50\n   kasan_record_aux_stack+0xa3/0xb0\n   __queue_work+0x2ff/0x1390\n   queue_work_on+0x67/0xc0\n   cm_event_handler+0x46a/0x820 [iw_cm]\n   siw_cm_upcall+0x330/0x650 [siw]\n   siw_cm_work_handler+0x6b9/0x2b20 [siw]\n   process_one_work+0x84f/0x1460\n   worker_thread+0x5ef/0xfd0\n   kthread+0x3b0/0x770\n   ret_from_fork+0x30/0x70\n   ret_from_fork_asm+0x1a/0x30\n\nThis BUG is reproducible by repeating the blktests test case nvme/061\nfor the rdma transport and the siw driver.\n\nTo avoid the use-after-free of cm_id_private work objects, ensure that\nthe last reference to the cm_id is decremented not in the event handler\nworks, but in the cm_id destruction context. 
For that purpose, mo\n---truncated---(CVE-2025-38211)\n\nIn the Linux kernel, the following vulnerability has been resolved:\n\nvirtio-net: ensure the received length does not exceed allocated size\n\nIn xdp_linearize_page, when reading the following buffers from the ring,\nwe forget to check the received length against the true allocated size. This\ncan lead to an out-of-bounds read. This commit adds that missing check.(CVE-2025-38375)\n\nIn the Linux kernel, the following vulnerability has been resolved:\n\nNFSv4/pNFS: Clear NFS_INO_LAYOUTCOMMIT in pnfs_mark_layout_stateid_invalid\n\nFixes a crash when the layout is null during this call stack:\n\nwrite_inode\n    -> nfs4_write_inode\n        -> pnfs_layoutcommit_inode\n\npnfs_set_layoutcommit relies on the lseg refcount to keep the layout\naround. Need to clear NFS_INO_LAYOUTCOMMIT, otherwise we might attempt\nto reference a null layout.(CVE-2025-68349)","affected":[{"package":{"ecosystem":"openEuler:22.03-LTS-SP4","name":"kernel","purl":"pkg:rpm/openEuler/kernel?distro=openEuler-22.03-LTS-SP4"},"ranges":[{"type":"ECOSYSTEM","events":[{"introduced":"0"},{"fixed":"5.10.0-297.0.0.200.oe2203sp4"}]}],"ecosystem_specific":{"aarch64":["bpftool-5.10.0-297.0.0.200.oe2203sp4.aarch64.rpm","bpftool-debuginfo-5.10.0-297.0.0.200.oe2203sp4.aarch64.rpm","kernel-5.10.0-297.0.0.200.oe2203sp4.aarch64.rpm","kernel-debuginfo-5.10.0-297.0.0.200.oe2203sp4.aarch64.rpm","kernel-debugsource-5.10.0-297.0.0.200.oe2203sp4.aarch64.rpm","kernel-devel-5.10.0-297.0.0.200.oe2203sp4.aarch64.rpm","kernel-headers-5.10.0-297.0.0.200.oe2203sp4.aarch64.rpm","kernel-source-5.10.0-297.0.0.200.oe2203sp4.aarch64.rpm","kernel-tools-5.10.0-297.0.0.200.oe2203sp4.aarch64.rpm","kernel-tools-debuginfo-5.10.0-297.0.0.200.oe2203sp4.aarch64.rpm","kernel-tools-devel-5.10.0-297.0.0.200.oe2203sp4.aarch64.rpm","perf-5.10.0-297.0.0.200.oe2203sp4.aarch64.rpm","perf-debuginfo-5.10.0-297.0.0.200.oe2203sp4.aarch64.rpm","python3-perf-5.10.0-297.0.0.200.oe2203sp4.aarch64.rpm","p
ython3-perf-debuginfo-5.10.0-297.0.0.200.oe2203sp4.aarch64.rpm"],"src":["kernel-5.10.0-297.0.0.200.oe2203sp4.src.rpm"],"x86_64":["bpftool-5.10.0-297.0.0.200.oe2203sp4.x86_64.rpm","bpftool-debuginfo-5.10.0-297.0.0.200.oe2203sp4.x86_64.rpm","kernel-5.10.0-297.0.0.200.oe2203sp4.x86_64.rpm","kernel-debuginfo-5.10.0-297.0.0.200.oe2203sp4.x86_64.rpm","kernel-debugsource-5.10.0-297.0.0.200.oe2203sp4.x86_64.rpm","kernel-devel-5.10.0-297.0.0.200.oe2203sp4.x86_64.rpm","kernel-headers-5.10.0-297.0.0.200.oe2203sp4.x86_64.rpm","kernel-source-5.10.0-297.0.0.200.oe2203sp4.x86_64.rpm","kernel-tools-5.10.0-297.0.0.200.oe2203sp4.x86_64.rpm","kernel-tools-debuginfo-5.10.0-297.0.0.200.oe2203sp4.x86_64.rpm","kernel-tools-devel-5.10.0-297.0.0.200.oe2203sp4.x86_64.rpm","perf-5.10.0-297.0.0.200.oe2203sp4.x86_64.rpm","perf-debuginfo-5.10.0-297.0.0.200.oe2203sp4.x86_64.rpm","python3-perf-5.10.0-297.0.0.200.oe2203sp4.x86_64.rpm","python3-perf-debuginfo-5.10.0-297.0.0.200.oe2203sp4.x86_64.rpm"]}}],"references":[{"type":"ADVISORY","url":"https://www.openeuler.org/zh/security/security-bulletins/detail/?id=openEuler-SA-2026-1010"},{"type":"ADVISORY","url":"https://nvd.nist.gov/vuln/detail/CVE-2025-38211"},{"type":"ADVISORY","url":"https://nvd.nist.gov/vuln/detail/CVE-2025-38375"},{"type":"ADVISORY","url":"https://nvd.nist.gov/vuln/detail/CVE-2025-68349"}],"database_specific":{"severity":"High"}}
