
Occlusion in Augmented Reality

1. Occlusion in Augmented Reality

Notes sources:
1. Occlusion handling in Augmented Reality context
2. Occlusion in Augmented Reality
3. Real-Time Occlusion Handling in Augmented Reality Based on an Object Tracking Approach
4. Occlusion Matting: Realistic Occlusion Handling for Augmented Reality Applications

Disclaimer: most of this content comes from the papers above and is intended for study purposes only.

1.1 The Occlusion Problem


If occlusion is handled incorrectly, the visualization of occluding objects can mislead the user about the spatial properties of the scene and, in extreme cases, break the immersive AR experience.

The goal of the papers discussed here is to handle occlusion in AR in an unknown environment using only an RGB-D camera. More specifically, the focus is on the harder case in which a real object occludes a virtual object; other cases, such as a virtual object occluding a real one, are not covered.
When a virtual object occludes a real object, simply increasing the brightness of the virtual object is enough to convey the occlusion.
When a real object occludes a virtual object, image-processing methods are required. More concretely, the approach analyzes and merges data from the depth and RGB sensors.

1.2 Types of Occlusion


(1) Occlusion between objects in the real environment (occurs purely in the real world)
(2) Occlusion between virtual objects (occurs purely in the virtual world)
(3) Occlusion between virtual and real objects (occurs in the artificially created environment where the two coexist)

1.3 Related Methods

1.3.1 Object-based Method

From the paper: Real-Time Occlusion Handling in Augmented Reality Based on an Object Tracking Approach
This method handles occlusion using object contours.
It assumes a static scene in which the occlusion relationship between real and virtual objects is fixed.
Steps:
(1) Occluding-object selection: the user selects the occluding object with an interactive segmentation method.
(2) Object tracking: the contour of the selected object is tracked in real time in subsequent frames.
(3) Occlusion handling: all pixels of the tracked object are repainted onto the unprocessed augmented image, producing a new composite image with the correct relative order of real and virtual objects.
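Step (3) above can be sketched as a simple mask-based repaint: wherever the tracked occluder's silhouette is, the original camera pixels overwrite the augmented frame. This is a minimal sketch, assuming the tracked silhouette is already available as a boolean mask; `repaint_occluder` and its argument names are hypothetical.

```python
import numpy as np

def repaint_occluder(camera_frame, augmented_frame, occluder_mask):
    """Copy pixels of the tracked real occluder back over the augmented image.

    occluder_mask is a boolean H x W array marking the tracked object's
    silhouette (produced by the contour-tracking step in the real pipeline).
    """
    composite = augmented_frame.copy()
    composite[occluder_mask] = camera_frame[occluder_mask]
    return composite
```

Because the occluder pixels are copied last, the real object always appears in front of any virtual content it overlaps, which is exactly the fixed occlusion order the method assumes.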

1.3.2 Model-based Method

For a simple static scene, the scene can be reconstructed in 3D directly, converting the real scene into a virtual one; this effectively turns the virtual-real occlusion problem into a virtual-virtual one.
For an unknown static environment without any prior knowledge, real-time dense 3D reconstruction can resolve virtual-real occlusion, but it performs poorly in highly dynamic scenes.
If the reconstruction fails to recover real-world detail, the quality of the rendered occlusion also degrades sharply.
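Once the real scene is reconstructed, the virtual-virtual occlusion test reduces to a per-pixel depth comparison between the reconstructed real geometry and the virtual object. A minimal sketch (the function name and the assumption of two aligned depth arrays are mine):

```python
import numpy as np

def virtual_visibility(real_depth, virtual_depth):
    """Per-pixel z-test: the virtual object is drawn only where it lies
    closer to the camera than the reconstructed real geometry."""
    return virtual_depth < real_depth
```

Everywhere the mask is False, the reconstructed real surface is in front, so the virtual pixel is suppressed; poor reconstruction therefore directly translates into wrong visibility decisions along object boundaries.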

1.3.3 Depth-based Method

What is a trimap?
A trimap is a type of image used in image processing, particularly in tasks like image matting and segmentation. It helps to delineate different regions of an image, typically into three categories:

  1. Foreground: The object of interest.
  2. Background: The area behind the object of interest.
  3. Unknown or Transition Area: The region where the object blends with the background and is not easily classified as either foreground or background.


From the paper: Occlusion Matting: Realistic Occlusion Handling for Augmented Reality Applications

3D Rendering

Purpose: The 3D rendering stage creates a visual representation of the virtual scene using color (RGB) and depth information. This step is essential because it provides the basis for blending the virtual elements with the real-world scene in later stages.

Input: The 3D Rendering stage receives an RGB image and a depth map as inputs.
Output: The output includes a rendered RGB texture (showing color details) and a rendered depth texture (showing depth values).

Scene depth and scene-texture depth are the same underlying concept; in computer graphics they simply serve different purposes: the former captures the structure of the scene, while the latter is used for texture effects.
Compare: depth map vs. depth texture.

Adaptive Trimap Generation

Purpose: Classify the image's pixels into three regions: foreground, background, and unknown. The trimap guides the later stages toward accurate foreground/background propagation and alpha estimation.
Input:
Rendered Depth Map: Depth information of the virtual scene generated in the 3D rendering stage.
Depth Map from Sensor: Depth information captured by the sensor, indicating distances of real-world objects from the camera.
Color Image (RGB): Color data captured by the sensor, providing visual details of the real-world scene.
Output:
Trimap: An image with three regions:
(1)White (Foreground): Pixels definitely belonging to the foreground.
(2)Black (Background): Pixels definitely belonging to the background.
(3)Gray (Unknown): Pixels whose classification is uncertain.

Detailed steps
Assume a virtual plane (a blue square) sits behind a real object (a chair):
(1) The depth map is noisy, so it is first smoothed with a low-pass filter.
(2) Coarse segmentation of the real object's depth map produces a coarse trimap.
(3) Edges in the unknown region U are labeled as belonging to the front or the back, i.e. to the foreground F or the background B.
(4) The region U is dilated by a fixed amount.

Coarse segmentation in detail
The depth map is split into a foreground map (F) and a background map (B) with a depth test; within the valid region (where real and virtual content coexist), convolution with a 3×3 Sobel kernel yields the unknown region (U).

Labeling in detail
Figure (a) shows the coarsely segmented depth map overlaid on the RGB object-edge map. The RGB object edges serve as the boundary that decides whether a stretch of the unknown region belongs to the foreground or the background.
Object boundary in RGB (red), foreground map from coarse depth segmentation (F, white), background map (B, black), and unknown region (U, blue).
Whether each stretch of the unknown region belongs to the foreground or the background is then clear from the figure:
In the orange region (b), U lies behind the red boundary, so this stretch of the unknown region belongs to the background.
In the light-blue region (c), U lies in front of the red boundary, so this stretch belongs to the foreground.

Dilation in detail
For each pixel k in the foreground and background, check whether a small window around it contains a pixel i from the unknown region; if so, mark pixel k as unknown as well, recording whether it previously belonged to the foreground or the background.
Why is dilation needed?
The unknown region (blue) lies along the boundaries of the depth image (the black/white border), not necessarily along the boundaries of the color image (red). This strongly affects alpha estimation, because some background regions are mistakenly treated as foreground and vice versa. To overcome this, the unknown region is expanded toward the color-image boundaries. A large dilation means a large unknown region that is likely to cover the color edges, but it also means the known foreground region shrinks. The goal is to solve the problem above while keeping the unknown region as small as possible.

Adaptive dilation
Boundaries of different shapes require different amounts of dilation.
The amount of dilation depends on the number of points in the unknown region labeled "no edge"; if this number exceeds a given threshold, the dilation amount is increased.

Foreground and Background Propagation

Purpose: Extend the known foreground and background regions into the unknown region identified in the trimap. This step refines the classification of pixels (deciding which pixels in the unknown region belong to the foreground or the background) and makes the subsequent alpha estimation more accurate and reliable.
Input:
Trimap: An image from the Adaptive Trimap Generation stage, with pixels labeled as foreground, background, or unknown.
Color Image (RGB): The color data from the sensor, providing visual details of the real-world scene.
Output:
Propagated Foreground Image: An image where the foreground regions have been extended into the unknown areas.
Propagated Background Image: An image where the background regions have been extended into the unknown areas.

Copy the foreground image (setting alpha to 0 in the unknown region and 1 elsewhere), build an image pyramid of it bottom-up, then blur top-down; by the time the finest level is reached, the unknown region has been filled with the blurred foreground colors from the coarser levels. The number of times this is repeated is what the paper calls the number of diffusion iterations. The same operation is applied to the background image.

Algorithm 1: Foreground Propagation
Input: Foreground image F, levels l, iterations i
Output: Propagated foreground image F

1:  S = copy(F)
2:  Create pyramid with l levels:
3:  for each level from finest to coarsest:
4:    Initialize finest level with known foreground (α = 1)
5:    Set unknown pixels α = 0
6:    for each coarser level:
7:      Apply Gaussian filter to previous finer level
8:      Store in current level
9:  end for
10: Top-down blurring:
11: for each level from coarsest to finest:
12:   Apply quadratic B-Spline interpolation
13:   Calculate new pixel colors from coarser level
14:   Weight interpolation by α values
15: end for
16: Write back blurred colors:
17: for each pixel in F:
18:   if pixel is unknown:
19:     Write blurred color with linear interpolation
20:     Regularize with value n
21:   end if
22: end for
23: Repeat steps 1-22 for i iterations
24: Return F

Alpha Estimation

Purpose: The alpha estimation stage determines the alpha (transparency) value of every pixel in the trimap's unknown region. The alpha value defines the fraction of foreground vs. background in a pixel, enabling seamless blending in the final composite image.
Input: The stage receives the propagated foreground and background images and the trimap.
Output: It produces an alpha matte, visually showing the transparency levels of the unknown regions, ready for the final compositing step.
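The closed-form estimate behind "equation (2)" in Algorithm 2 is typically the projection of the observed color onto the line between the candidate foreground and background colors; since the paper's exact equation is not reproduced here, the following is the standard matting formula under that assumption:

```python
import numpy as np

def estimate_alpha_hat(I, F, B):
    """Per-pixel alpha: project observed colour I onto the F-B colour line,
    alpha_hat = ((I - B) . (F - B)) / ||F - B||^2, clipped to [0, 1]."""
    I, F, B = (np.asarray(v, dtype=float) for v in (I, F, B))
    denom = np.dot(F - B, F - B)
    if denom == 0:
        return 0.0  # degenerate pair: F and B identical
    return float(np.clip(np.dot(I - B, F - B) / denom, 0.0, 1.0))
```

A color exactly halfway between F and B yields alpha ≈ 0.5; the color cost in Algorithm 2 then measures how far I is from the line this alpha reconstructs.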

Algorithm 2: Alpha Estimation
Input: Color image I, propagated foreground F, propagated background B, neighborhood size n, weight w
Output: Alpha matte α

1:  for each pixel p in I do
2:    Collect samples F_samples and B_samples from n × n neighborhood
3:    Initialize min_cost to infinity
4:    for each pair (F_i, B_j) in (F_samples, B_samples) do
5:      Estimate α_hat using equation (2)
6:      Calculate color cost C_col using equation (3)
7:      Calculate propagation cost C_pro using equation (4)
8:      Calculate total cost: cost = w * C_col + C_pro
9:      if cost < min_cost then
10:       min_cost = cost
11:       best_pair = (F_i, B_j)
12:   end for
13:   Set α_p = α_hat for best_pair
14: end for
15: Set α values for known foreground pixels to 1 and for background pixels to 0
16: Calculate alpha values for virtual objects using 1 - α_p
17: Return α

Compositing

Purpose: The final stage uses the alpha matte to combine the rendered virtual scene and the color image of the real scene into a single composite image with seamless blending.
Input: RGB image from the sensor and the rendered RGB image, along with the alpha matte.
Output: Final composite image.

Algorithm 3: Compositing
Input: Color image from real scene C_foreground, color image from virtual scene C_background, alpha matte α
Output: Composite image C_composite

1: for each pixel p in C_composite do
2:   α_p = α(p)  # Alpha value at pixel p
3:   C_fg = C_foreground(p)  # Foreground color at pixel p
4:   C_bg = C_background(p)  # Background color at pixel p
5:   C_composite(p) = α_p * C_fg + (1 - α_p) * C_bg  # Calculate composite color
6: end for
7: Apply anti-aliasing to C_composite to smooth edges
8: Perform color correction on C_composite if needed
9: Adjust lighting on C_composite for consistency
10: Return C_composite

Code (generated by GPT)
Step 1: 3D Rendering
This step involves rendering the color and depth values of the virtual scene. For simplicity, we’ll assume these are already provided as images.

import cv2
import numpy as np

# Load color and depth images from the virtual scene
color_image_virtual = cv2.imread('color_virtual.png')
depth_image_virtual = cv2.imread('depth_virtual.png', cv2.IMREAD_UNCHANGED)

Step 2: Adaptive Trimap Generation
Generate a trimap that specifies foreground, background, and unknown regions.

def generate_trimap(depth_image, threshold=10):
    # Note: this sketch uses its own label convention, which the later
    # steps follow: 255 = background, 128 = foreground, 0 = unknown
    trimap = np.zeros_like(depth_image)
    trimap[depth_image < threshold] = 255        # Background
    trimap[depth_image > 255 - threshold] = 128  # Foreground
    trimap[(depth_image >= threshold) & (depth_image <= 255 - threshold)] = 0  # Unknown
    return trimap

depth_image = cv2.imread('depth_image.png', cv2.IMREAD_GRAYSCALE)
trimap = generate_trimap(depth_image)

Step 3: Foreground and Background Propagation
Propagate known foreground and background colors to unknown regions.

def propagate_colors(image, trimap, num_levels=4, num_iterations=5):
    foreground = np.zeros_like(image)
    background = np.zeros_like(image)
    foreground[trimap == 128] = image[trimap == 128]  # known foreground pixels
    background[trimap == 255] = image[trimap == 255]  # known background pixels
    for _ in range(num_iterations):
        # One pyramid down/up pass acts as a large blur that diffuses
        # the known colors into the unknown region
        blurred_foreground = cv2.pyrUp(cv2.pyrDown(foreground))
        foreground[trimap == 0] = blurred_foreground[trimap == 0]
        blurred_background = cv2.pyrUp(cv2.pyrDown(background))
        background[trimap == 0] = blurred_background[trimap == 0]
    return foreground, background

foreground, background = propagate_colors(cv2.imread('color_image.png'), trimap)

Step 4: Alpha Estimation
Estimate the alpha matte based on the propagated foreground and background colors.

def estimate_alpha(image, foreground, background, trimap, weight=0.5):
    # Work in float to avoid uint8 wrap-around in the colour differences
    image = image.astype(np.float32)
    foreground = foreground.astype(np.float32)
    background = background.astype(np.float32)
    alpha = np.zeros(image.shape[:2], dtype=np.float32)
    rows, cols = image.shape[:2]
    for r in range(rows):
        for c in range(cols):
            if trimap[r, c] == 0:  # unknown pixel
                best_cost = float('inf')
                best_alpha = 0
                for i in range(max(0, r - 1), min(rows, r + 2)):
                    for j in range(max(0, c - 1), min(cols, c + 2)):
                        if trimap[i, j] in [128, 255]:
                            alpha_value = np.dot(image[r, c] - background[i, j],
                                                 foreground[i, j] - background[i, j]) \
                                / np.linalg.norm(foreground[i, j] - background[i, j]) ** 2
                            color_cost = np.linalg.norm(
                                image[r, c] - (alpha_value * foreground[i, j]
                                               + (1 - alpha_value) * background[i, j]))
                            propagation_cost = (i + j) / 2  # Simplified propagation cost
                            total_cost = weight * color_cost + (1 - weight) * propagation_cost
                            if total_cost < best_cost:
                                best_cost = total_cost
                                best_alpha = alpha_value
                alpha[r, c] = best_alpha
            elif trimap[r, c] == 128:  # known foreground
                alpha[r, c] = 1
            elif trimap[r, c] == 255:  # known background
                alpha[r, c] = 0
    return alpha

alpha_matte = estimate_alpha(cv2.imread('color_image.png'), foreground, background, trimap)

Step 5: Compositing
Combine the color images of the rendered and real scene into a single image using the alpha matte.

def composite_images(foreground_image, background_image, alpha_matte):
    alpha_matte_expanded = np.expand_dims(alpha_matte, axis=2)
    composite_image = alpha_matte_expanded * foreground_image \
        + (1 - alpha_matte_expanded) * background_image
    return composite_image.astype(np.uint8)  # back to 8-bit for saving/display

color_image_real = cv2.imread('color_image_real.png')
composite_image = composite_images(color_image_real, color_image_virtual, alpha_matte)

# Save or display the composite image
cv2.imwrite('composite_image.png', composite_image)
cv2.imshow('Composite Image', composite_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
