
Hammersley–Clifford theorem

https://en.wikipedia.org/wiki/Hammersley%E2%80%93Clifford_theorem

From Wikipedia, the free encyclopedia


The Hammersley–Clifford theorem is a result in probability theory, mathematical statistics and statistical mechanics that gives necessary and sufficient conditions under which a strictly positive probability distribution can be represented as a Markov network (also known as a Markov random field). It is the fundamental theorem of random fields.[1] It states that a probability distribution with a strictly positive mass or density satisfies one of the Markov properties with respect to an undirected graph G if and only if it is a Gibbs random field, that is, its density can be factorized over the cliques (or complete subgraphs) of the graph.

The study of the relationship between Markov and Gibbs random fields was initiated by Roland Dobrushin[2] and Frank Spitzer[3] in the context of statistical mechanics. The theorem is named after John Hammersley and Peter Clifford, who proved the equivalence in an unpublished paper in 1971.[4][5] Simpler proofs using the inclusion–exclusion principle were given independently by Geoffrey Grimmett,[6] Preston[7] and Sherman[8] in 1973, with a further proof by Julian Besag in 1974.[9]

 


Proof Outline

A simple Markov network for demonstrating that any Gibbs random field satisfies every Markov property.

It is a trivial matter to show that a Gibbs random field satisfies every Markov property. As an example of this fact, see the following:

In the image to the right, a Gibbs random field over the provided graph has the form $\Pr(A,B,C,D,E,F) \propto f_1(A,B,D)\,f_2(A,C,D)\,f_3(C,D,F)\,f_4(C,E,F)$. If variables $C$ and $D$ are fixed, then the global Markov property requires that $A,B \perp E,F \mid C,D$ (see conditional independence), since $C,D$ forms a barrier between $A,B$ and $E,F$.

With $C$ and $D$ constant, $\Pr(A,B,E,F \mid C,D) \propto [f_1(A,B,D)\,f_2(A,C,D)] \cdot [f_3(C,D,F)\,f_4(C,E,F)] = g_1(A,B)\,g_2(E,F)$, where $g_1(A,B) = f_1(A,B,D)\,f_2(A,C,D)$ and $g_2(E,F) = f_3(C,D,F)\,f_4(C,E,F)$. This implies that $A,B \perp E,F \mid C,D$.
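The conditional independence above can also be checked numerically. The following self-contained sketch (not part of the original article) assumes binary variables and arbitrary strictly positive factor tables over the four cliques of the example graph, then verifies exhaustively that $A,B \perp E,F \mid C,D$:

```python
import itertools
import random

random.seed(0)
vals = (0, 1)  # binary variables, so the check can be exhaustive

# Arbitrary strictly positive clique factors; keys are joint assignments
# to the three variables each factor touches.
def random_factor():
    return {k: random.uniform(0.1, 1.0) for k in itertools.product(vals, repeat=3)}

f1, f2, f3, f4 = (random_factor() for _ in range(4))

def joint(a, b, c, d, e, f):
    """Unnormalized Gibbs density f1(A,B,D) f2(A,C,D) f3(C,D,F) f4(C,E,F)."""
    return f1[a, b, d] * f2[a, c, d] * f3[c, d, f] * f4[c, e, f]

Z = sum(joint(*u) for u in itertools.product(vals, repeat=6))

def pr(a, b, c, d, e, f):
    """Normalized joint probability Pr(A,B,C,D,E,F)."""
    return joint(a, b, c, d, e, f) / Z

# Check A,B independent of E,F given C,D: for every fixed (c, d), the
# conditional joint must equal the product of the conditional marginals.
for c, d in itertools.product(vals, repeat=2):
    p_cd = sum(pr(a, b, c, d, e, f)
               for a, b, e, f in itertools.product(vals, repeat=4))
    for a, b, e, f in itertools.product(vals, repeat=4):
        p_abef = pr(a, b, c, d, e, f) / p_cd
        p_ab = sum(pr(a, b, c, d, ee, ff)
                   for ee, ff in itertools.product(vals, repeat=2)) / p_cd
        p_ef = sum(pr(aa, bb, c, d, e, f)
                   for aa, bb in itertools.product(vals, repeat=2)) / p_cd
        assert abs(p_abef - p_ab * p_ef) < 1e-9
```

The assertions pass for any choice of positive factors, illustrating that the factorized (Gibbs) form alone forces the global Markov property.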

To establish that every positive probability distribution satisfying the local Markov property is also a Gibbs random field, one first proves the following lemma, which provides a means of combining different factorizations:

Lemma 1 provides a means for combining factorizations as shown in this diagram. Note that in this image, the overlap between sets is ignored.

Lemma 1

Let $U$ denote the set of all random variables under consideration, and let $\Theta, \Phi_1, \Phi_2, \dots, \Phi_n \subseteq U$ and $\Psi_1, \Psi_2, \dots, \Psi_m \subseteq U$ denote arbitrary sets of variables. (Here, given an arbitrary set of variables $X$, $X$ will also denote an arbitrary assignment to the variables from $X$.)

If

$$\Pr(U) = f(\Theta) \prod_{i=1}^{n} g_i(\Phi_i) = \prod_{j=1}^{m} h_j(\Psi_j)$$

for functions $f, g_1, g_2, \dots, g_n$ and $h_1, h_2, \dots, h_m$, then there exist functions $h'_1, h'_2, \dots, h'_m$ and $g'_1, g'_2, \dots, g'_n$ such that

$$\Pr(U) = \bigg( \prod_{j=1}^{m} h'_j(\Theta \cap \Psi_j) \bigg) \bigg( \prod_{i=1}^{n} g'_i(\Phi_i) \bigg)$$

In other words, $\prod_{j=1}^{m} h_j(\Psi_j)$ provides a template for further factorization of $f(\Theta)$.

 
Proof of Lemma 1
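The following is a brief sketch of the standard construction, under the additional assumption (harmless for the theorem, which concerns strictly positive distributions) that all factors are strictly positive:

```latex
Fix a reference assignment $\bar{u}$ to the variables of $U$. Given an
assignment $\theta$ to $\Theta$, let $(\theta, \bar{u})$ denote the full
assignment that agrees with $\theta$ on $\Theta$ and with $\bar{u}$ on
$U \setminus \Theta$. Evaluating both factorizations at $(\theta, \bar{u})$
gives
\[
  f(\Theta) \prod_{i=1}^{n} g_i\big(\Phi_i|_{(\theta,\bar{u})}\big)
  = \prod_{j=1}^{m} h_j\big(\Psi_j|_{(\theta,\bar{u})}\big).
\]
Each factor now depends on $\theta$ only through the variables of $\Theta$
it contains, so writing $h'_j(\Theta \cap \Psi_j) :=
h_j\big(\Psi_j|_{(\theta,\bar{u})}\big)$ and $\bar{g}_i(\Theta \cap \Phi_i)
:= g_i\big(\Phi_i|_{(\theta,\bar{u})}\big)$, we may solve for $f$:
\[
  f(\Theta)
  = \frac{\prod_{j=1}^{m} h'_j(\Theta \cap \Psi_j)}
         {\prod_{i=1}^{n} \bar{g}_i(\Theta \cap \Phi_i)}.
\]
Substituting this into $\Pr(U) = f(\Theta) \prod_i g_i(\Phi_i)$ and setting
$g'_i(\Phi_i) := g_i(\Phi_i) / \bar{g}_i(\Theta \cap \Phi_i)$, which is a
function of $\Phi_i$ alone since $\Theta \cap \Phi_i \subseteq \Phi_i$,
yields the claimed form.
\]
```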

The clique formed by vertices $x_1$, $x_2$, and $x_3$ is the intersection of $\{x_1\} \cup \partial x_1$, $\{x_2\} \cup \partial x_2$, and $\{x_3\} \cup \partial x_3$.

Lemma 1 provides a means of combining two different factorizations of $\Pr(U)$. The local Markov property implies that for any random variable $x \in U$, there exist factors $f_x$ and $f_{-x}$ such that:

$$\Pr(U) = f_x(x, \partial x)\, f_{-x}(U \setminus \{x\})$$

where $\partial x$ denotes the neighbors of node $x$. Applying Lemma 1 repeatedly eventually factors $\Pr(U)$ into a product of clique potentials (see the image on the right).
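In slightly more detail, one round of this iteration can be sketched as follows (a sketch of the argument, not the original text):

```latex
Start from the factorization induced by $x_1$ and use the one induced by
$x_2$ as the template, taking $\Theta = \{x_1\} \cup \partial x_1$ (the
domain of $f_{x_1}$), $\Psi_1 = \{x_2\} \cup \partial x_2$ and
$\Psi_2 = U \setminus \{x_2\}$. Lemma 1 then splits $f_{x_1}$ into factors
on
\[
  \big(\{x_1\} \cup \partial x_1\big) \cap \big(\{x_2\} \cup \partial x_2\big)
  \quad \text{and} \quad
  \big(\{x_1\} \cup \partial x_1\big) \setminus \{x_2\}.
\]
Iterating over every vertex, the domain of each surviving factor is
contained in $\{x\} \cup \partial x$ for every variable $x$ it contains, so
any two variables appearing in the same factor are neighbors: each domain
is a clique. In the image, for instance,
$\big(\{x_1\} \cup \partial x_1\big) \cap \big(\{x_2\} \cup \partial x_2\big)
\cap \big(\{x_3\} \cup \partial x_3\big) = \{x_1, x_2, x_3\}$ is a clique.
```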

End of Proof

See also

  • Markov random field
  • Conditional random field

Notes

  1. Lafferty, John D.; McCallum, Andrew (2001). "Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data". ICML. Retrieved 14 December 2014. "by the fundamental theorem of random fields (Hammersley & Clifford, 1971)"
  2. Dobrushin, P. L. (1968), "The Description of a Random Field by Means of Conditional Probabilities and Conditions of Its Regularity", Theory of Probability and its Applications, 13 (2): 197–224, doi:10.1137/1113026
  3. Spitzer, Frank (1971), "Markov Random Fields and Gibbs Ensembles", The American Mathematical Monthly, 78 (2): 142–154, doi:10.2307/2317621, JSTOR 2317621
  4. Hammersley, J. M.; Clifford, P. (1971), Markov fields on finite graphs and lattices (PDF)
  5. Clifford, P. (1990), "Markov random fields in statistics", in Grimmett, G. R.; Welsh, D. J. A. (eds.), Disorder in Physical Systems: A Volume in Honour of John M. Hammersley, Oxford University Press, pp. 19–32, ISBN 0-19-853215-6, MR 1064553
  6. Grimmett, G. R. (1973), "A theorem about random fields", Bulletin of the London Mathematical Society, 5 (1): 81–84, doi:10.1112/blms/5.1.81, MR 0329039
  7. Preston, C. J. (1973), "Generalized Gibbs states and Markov random fields", Advances in Applied Probability, 5 (2): 242–261, doi:10.2307/1426035, JSTOR 1426035, MR 0405645
  8. Sherman, S. (1973), "Markov random fields and Gibbs random fields", Israel Journal of Mathematics, 14 (1): 92–103, doi:10.1007/BF02761538, MR 0321185
  9. Besag, J. (1974), "Spatial interaction and the statistical analysis of lattice systems", Journal of the Royal Statistical Society, Series B, 36 (2): 192–236, JSTOR 2984812, MR 0373208
