<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Research Diagram</title>
    <style>
        body {
            font-family: Arial, sans-serif;
            padding: 20px;
            background: #f5f5f5;
        }

        /* Research Diagram Styles */
        .container {
            max-width: 1800px;
            margin: 0 auto;
            width: 100%;
        }

        .top-concept {
            background: #ffffff;
            border: 4px solid #6b7c8e;
            padding: 25px;
            margin-bottom: 20px;
            border-radius: 4px;
            box-shadow: 0 2px 8px rgba(0,0,0,0.1);
            text-align: center;
        }

        .top-concept h1 {
            font-size: 30px;
            color: #2c3e50;
            margin-bottom: 12px;
            font-weight: 600;
        }

        .top-concept .subtitle {
            font-size: 17px;
            color: #444;
            margin: 8px 0;
        }

        .top-concept .theory-question {
            font-size: 16px;
            color: #555;
            font-style: italic;
            margin: 8px 0;
        }

        .top-concept .research-flow {
            font-size: 15px;
            color: #5a7a5a;
            margin-top: 15px;
            font-weight: 500;
        }

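        /* Vertical connector arrows from the top concept down to each category column */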
        .arrows-down {
            display: flex;
            justify-content: space-around;
            align-items: center;
            margin: 15px 0;
            height: 40px;
        }

        .arrow-down {
            width: 2px;
            height: 35px;
            background: #6b7c8e;
            position: relative;
        }

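        /* Arrowhead: a zero-size box whose transparent side borders and solid top
           border render as a downward-pointing CSS triangle */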
        .arrow-down::after {
            content: '';
            position: absolute;
            bottom: -8px;
            left: 50%;
            transform: translateX(-50%);
            width: 0;
            height: 0;
            border-left: 8px solid transparent;
            border-right: 8px solid transparent;
            border-top: 10px solid #6b7c8e;
        }

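        /* Four equal-width columns; flex: 1 with min-width: 0 lets each box shrink
           evenly instead of overflowing on narrow viewports */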
        .categories-grid {
            display: flex;
            flex-direction: row;
            justify-content: space-between;
            gap: 12px;
            margin-bottom: 40px;
            max-width: 100%;
            width: 100%;
            position: relative;
        }

        .category-box {
            background: white;
            border: 3px solid #6b7c8e;
            border-radius: 4px;
            padding: 12px;
            box-shadow: 0 2px 6px rgba(0,0,0,0.08);
            flex: 1;
            min-width: 0;
            max-width: 25%;
        }

        .category-header {
            font-size: 18px;
            font-weight: bold;
            color: #2c3e50;
            margin-bottom: 8px;
            padding-bottom: 8px;
            border-bottom: 2px solid #6b7c8e;
        }

        .category-tagline {
            font-size: 12px;
            color: #666;
            font-style: italic;
            margin-bottom: 12px;
        }

        .subcategory {
            margin-bottom: 12px;
            padding-bottom: 12px;
            border-bottom: 1px solid #ddd;
        }

        .subcategory:last-child {
            margin-bottom: 0;
            padding-bottom: 0;
            border-bottom: none;
        }

        .subcategory-title {
            font-size: 14px;
            font-style: italic;
            color: #2c3e50;
            margin-bottom: 6px;
            font-weight: 500;
        }

        .paper-refs {
            font-size: 12px;
            color: #2c3e50;
            line-height: 1.7;
        }

        .paper-ref {
            color: #5a7a5a;
            text-decoration: none;
            font-weight: 500;
            display: block;
            margin: 3px 0;
        }

        .paper-ref:hover {
            text-decoration: underline;
            color: #3d5c3d;
        }

        .venue {
            color: #71b07b;
            font-weight: 600;
            font-size: 11px;
        }

        .plus-sign {
            color: #2c3e50;
            margin-right: 3px;
        }

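        /* Below 768px, stack the category columns and hide the connector arrows */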
        @media (max-width: 768px) {
            .categories-grid {
                flex-direction: column;
            }

            .category-box {
                min-width: auto;
                width: 100%;
                max-width: 100%;
            }

            .arrows-down {
                display: none;
            }
        }
    </style>
</head>
<body>
    <div class="container">
        <div class="top-concept">
            <h1>Causal & Compositional Representations</h1>
            <p class="subtitle">How should we represent various types of concepts?</p>
            <p class="theory-question">How to capture the compositional and causal structures underlying these concepts?</p>
            <p class="research-flow">These research areas share a unified goal: building interpretable, identifiable representations</p>
        </div>

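        <!-- Connector arrows linking the top concept to the four categories -->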
        <div class="arrows-down">
            <div class="arrow-down"></div>
            <div class="arrow-down"></div>
            <div class="arrow-down"></div>
            <div class="arrow-down"></div>
        </div>

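        <!-- One column per research area -->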
        <div class="categories-grid">
            <div class="category-box">
                <h2 class="category-header">Causal Representation Theory</h2>
                <p class="category-tagline">Theoretical foundations and identifiability guarantees</p>
                
                <div class="subcategory">
                    <p class="subcategory-title">Causal Representation Learning:</p>
                    <div class="paper-refs">
                        <a href="https://openreview.net/pdf?id=cW9Ttnm1aC" class="paper-ref">Nonparametric Identification <span class="venue">(ICML 2025)</span></a>
                        <a href="https://arxiv.org/pdf/2402.05052" class="paper-ref">Learning from Multiple Distributions <span class="venue">(ICML 2024)</span></a>
                        <a href="https://openreview.net/attachment?id=S8lfepB2fz&name=pdf" class="paper-ref">Nonparametric Mixing <span class="venue">(AISTATS 2025)</span></a>
                        <a href="https://arxiv.org/pdf/2503.00639" class="paper-ref">Sufficient Changes & Sparse Mixing <span class="venue">(ICLR 2025)</span></a>
                    </div>
                </div>

                <div class="subcategory">
                    <p class="subcategory-title">Counterfactual Reasoning:</p>
                    <div class="paper-refs">
                        <a href="https://arxiv.org/pdf/2306.05751" class="paper-ref">Nonlinear Quantile Regression <span class="venue">(arXiv 2024)</span></a>
                    </div>
                </div>
            </div>

            <div class="category-box">
                <h2 class="category-header">Controllable Generation</h2>
                <p class="category-tagline">Controllable generative models guided by causal principles and identification guarantees</p>
                
                <div class="subcategory">
                    <p class="subcategory-title">Multi-Domain Image Generation:</p>
                    <div class="paper-refs">
                        <a href="https://openreview.net/pdf?id=U2g8OGONA_V" class="paper-ref">Identifiability Guarantees <span class="venue">(ICLR 2023 Spotlight)</span></a>
                    </div>
                </div>

                <div class="subcategory">
                    <p class="subcategory-title">Unpaired Image Translation:</p>
                    <div class="paper-refs">
                        <a href="https://openaccess.thecvf.com/content/CVPR2023/papers/Xie_Unpaired_Image-to-Image_Translation_With_Shortest_Path_Regularization_CVPR_2023_paper.pdf" class="paper-ref">Shortest Path Regularization <span class="venue">(CVPR 2023)</span></a>
                        <a href="https://openreview.net/pdf?id=RNZ8JOmNaV4" class="paper-ref">Density Changing Regularization <span class="venue">(NeurIPS 2022)</span></a>
                        <a href="https://openaccess.thecvf.com/content/CVPR2022/papers/Xu_Maximum_Spatial_Perturbation_Consistency_for_Unpaired_Image-to-Image_Translation_CVPR_2022_paper.pdf" class="paper-ref">Maximum Spatial Perturbation <span class="venue">(CVPR 2022)</span></a>
                        <a href="https://openaccess.thecvf.com/content/ICCV2021/papers/Xie_Unaligned_Image-to-Image_Translation_by_Learning_to_Reweight_ICCV_2021_paper.pdf" class="paper-ref">Learning to Reweight <span class="venue">(ICCV 2021)</span></a>
                    </div>
                </div>

                <div class="subcategory">
                    <p class="subcategory-title">Efficient Generation:</p>
                    <div class="paper-refs">
                        <a href="https://arxiv.org/pdf/2306.12511.pdf" class="paper-ref">Semi-Implicit Denoising (SIDDMs) <span class="venue">(NeurIPS 2023)</span></a>
                    </div>
                </div>
            </div>

            <div class="category-box">
                <h2 class="category-header">Vision-Language Concepts</h2>
                <p class="category-tagline">Extending identifiability to multimodal representations</p>
                
                <div class="subcategory">
                    <p class="subcategory-title">Modular Vision-Language Alignment:</p>
                    <div class="paper-refs">
                        <a href="https://openaccess.thecvf.com/content/CVPR2025/papers/Xie_SmartCLIP_Modular_Vision-language_Alignment_with_Identification_Guarantees_CVPR_2025_paper.pdf" class="paper-ref">SmartCLIP <span class="venue">(CVPR 2025 Highlight)</span></a>
                    </div>
                </div>

                <div class="subcategory">
                    <p class="subcategory-title">Controllable Generation with Concepts:</p>
                    <div class="paper-refs">
                        <a href="https://openreview.net/pdf?id=hUHRTaTfvZ" class="paper-ref">Vision & Language Concepts <span class="venue">(ICML 2025)</span></a>
                    </div>
                </div>

                <div class="subcategory">
                    <p class="subcategory-title">Text-Guided Image Manipulation:</p>
                    <div class="paper-refs">
                        <a href="https://arxiv.org/pdf/2212.05034.pdf" class="paper-ref">SmartBrush <span class="venue">(CVPR 2023 Highlight)</span></a>
                        <a href="https://arxiv.org/pdf/2312.03771" class="paper-ref">DreamInpainter <span class="venue">(arXiv 2024)</span></a>
                    </div>
                </div>
            </div>

            <div class="category-box">
                <h2 class="category-header">Post-Training of Models</h2>
                <p class="category-tagline">Optimizing and interpreting trained generative models</p>
                
                <div class="subcategory">
                    <p class="subcategory-title">Interpretability of Generative Models:</p>
                    <div class="paper-refs">
                        <a href="#" class="paper-ref">Understanding Latent Structures</a>
                    </div>
                </div>

                <div class="subcategory">
                    <p class="subcategory-title">Policy Optimization for Diffusion Models:</p>
                    <div class="paper-refs">
                        <a href="#" class="paper-ref">RL for Diffusion Policies</a>
                    </div>
                </div>

                <div class="subcategory">
                    <p class="subcategory-title">Policy Optimization for Language Models:</p>
                    <div class="paper-refs">
                        <a href="#" class="paper-ref">Advanced RLHF Techniques</a>
                    </div>
                </div>

                <div class="subcategory">
                    <p class="subcategory-title">Reinforcement Learning for Visual Generation:</p>
                    <div class="paper-refs">
                        <a href="#" class="paper-ref">RL-Based Fine-Tuning</a>
                    </div>
                </div>
            </div>
        </div>
    </div>
</body>
</html>
