# 26_v4 実装仕様書 Codex版 - Instagram automation SaaS v4.1

作成日: 2026-04-18  
対象ADR: `23_v4_Decision_ADR_v4_final.html`  
採用Decision: C / Supabase Tokyo + Cloudflare Workers + PII minimization  
実装責任者想定: 子安氏  
本書の目的: ADRの再掲ではなく、実装者がDDL、RPC、Edge Functions、Worker、テスト、運用までそのまま落とし込める粒度の仕様を提供する。

## 0. 実装原則

v4.1は「PIIを完全に国外に出さない」とは主張しない。Class 1はSupabase Tokyoに保存し、Cloudflare/Supabase Edge/Anthropic/Sentry/Better Stackへ渡るデータはClass 2またはClass 3に限定する。ただし画像はClass 1を内包し得るため、画像は「PII-free判定済みの料理画像のみ」をClass 3とする。

主要なv3失敗原因は、以下の形でv4ではDB契約に落とす。

- Cronの二重claim: `claim_jobs()` + `FOR UPDATE SKIP LOCKED` + lease CASで解消する。
- IG外部副作用の二重実行: `external_operations` ledgerで、job leaseとは別に外部操作単位のidempotencyを持つ。
- audit fork: `audit_chains`行を`FOR UPDATE`して、seqとprev_hashを同一transactionで確定する。
- KV rate limitの非atomic性: Postgres `rate_limit_buckets`でatomic admission gateにする。
- 承認URLの二重POST: token hash + CSRF token + atomic state transitionでsingle-use化する。
- 画像由来PII漏出: EXIF/GPS strip、OCR/text density、signage/menu/face gateを通るまでClaude/R2へ送らない。

公式仕様で確認した実行制約:

- Supabase Edge Functions hosted platformは、Paid planでwall clock 400s、request idle timeout 150s、CPU 2s、memory 256MB。長時間処理は`EdgeRuntime.waitUntil()`を使い、処理単位はlease heartbeatで分割する。
- Supabaseは`pg_cron` + `pg_net.http_post`でEdge Functionを定期invokeできる。
- Supabase Edge Functionsは通常最寄りregion実行だが、`x-region: ap-northeast-1`または`forceFunctionRegion`でTokyo実行を指定できる。Class 1を扱うEdge Function呼び出しは必ずTokyo指定し、`SB_REGION`を検査する。
- Cloudflare R2 APAC location hintはbest effortであり、Tokyo residency保証ではない。
- Anthropic APIへ送信したデータは標準で最大30日保持され、通常はmodel trainingには使われない。zero data retentionは別契約・設定が必要。
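
上記のwall clock / idle timeout制約の下でjobを分割実行するworker loopの骨格を、説明用にsketchしておく。`heartbeat`には後述の`extend_job_lease` RPC呼び出しを注入する想定で、関数名`processWithHeartbeat`とchunkSizeの既定値は本書の規定ではなくこの例のための仮置きである。

```typescript
// chunkごとにlease heartbeatを打ちながらitemsを処理するsketch(仮名)。
// heartbeatがfalseを返す = lease喪失(別workerにclaimされた可能性)なので即中断する。
export async function processWithHeartbeat<T>(
  items: T[],
  handle: (item: T) => Promise<void>,
  heartbeat: () => Promise<boolean>, // extend_job_lease RPC呼び出しを注入する想定
  chunkSize = 5,
): Promise<number> {
  let done = 0
  for (let i = 0; i < items.length; i += chunkSize) {
    if (!(await heartbeat())) {
      // lease喪失後に処理を続けると二重実行になるため、外部副作用を増やさず即fail
      throw new Error("lease_lost")
    }
    for (const item of items.slice(i, i + chunkSize)) {
      await handle(item)
      done++
    }
  }
  return done
}
```

lease喪失時にretryせず例外で落とすのは、finalize_job側のbackoffに再スケジュールを委ねるためである。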

---

## 1. Data Classification & PII Handling

### 1.1 Class 1: PII / Tokyo-only

保存先はSupabase Postgres Tokyo、Supabase Auth Tokyo、Supabase Vaultのみ。

Class 1 items:

- `stores.name`, `stores.legal_name`, `stores.address`, `stores.phone`, `stores.public_email`
- `stores.ig_user_id`, `instagram_tokens.fb_user_id`, `instagram_tokens.page_id`, `instagram_tokens.ig_user_id`
- IG access token / refresh token / page token / system user token。実体はSupabase Vault、DBは`vault_secret_id`のみ保持
- `profiles.email`, `profiles.display_name`
- `store_memberships.user_id`
- approver email、approval token hash、CSRF token hash
- `posts.caption_text`, `posts.hashtags`, `posts.dish_names`, `posts.caption_sha256`
- raw audit actor data: `audit_logs.actor_id`, `audit_logs.ip`, `audit_logs.user_agent`
- login IP、failed login metadata
- raw image bytes before sanitization、EXIF/GPS、OCR text、看板/メニュー/顔の検出結果

Class 1 handling:

- Class 1をSentry、Better Stack、Cloudflare persisted logs、Anthropicに送信してはならない。
- Edge FunctionsでClass 1を処理する場合、呼び出し側は`x-region: ap-northeast-1`を付与し、関数内で`Deno.env.get("SB_REGION") === "ap-northeast-1"`を検証する。
- Cloudflare WorkersはAPI GatewayとしてClass 1を一時処理するが、ログ・KV・Durable Objects・Analyticsには保存しない。request pathにはstore slug/idを含めない。
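
region検証は各Class 1 Edge Functionの冒頭で共通guardとして呼ぶ想定。以下は説明用のsketchで、関数名`assertTokyoRegion`は仮のもの。Edge Function内では`assertTokyoRegion(Deno.env.get("SB_REGION"))`のように呼ぶ。

```typescript
// Class 1を読む前に実行regionを検証するguard(仮名)。
// Tokyo以外で起動された場合はClass 1データに触れる前に即failさせる。
export function assertTokyoRegion(region: string | undefined): void {
  if (region !== "ap-northeast-1") {
    throw new Error(`region_guard_failed: expected ap-northeast-1, got ${region ?? "unset"}`)
  }
}
```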

### 1.2 Class 2: Operational Data

国外vendorへ送信可。ただしredaction必須。

Class 2 items:

- error code、exception class、stack frame file名
- redacted route template: `/api/stores/:store_id/posts/:post_id`
- hashed store id
- latency、status code、retry count、job kind
- sanitized request id、cf_ray
- caption summary: hash、文字数、hashtag countのみ
- IP aggregate: IPv4 `/24`相当、IPv6 `/48`相当

禁止:

- raw caption
- raw store name/address/phone/email
- raw IP
- raw approval token
- raw image OCR text
- token value

### 1.3 Class 3: PII-free Image

Class 3は「以下を通過した料理画像のみ」。

- EXIF stripped
- GPS absentまたは除去済み
- OCR text density below threshold
- signage/menu/address/phone-like text not detected
- face/person detection not detected
- manual overrideなしで`media_assets.preflight_status = 'clean'`

処理済み画像のみR2 APAC bucketへ配置する。raw uploadは永続化しない。もしpipelineが失敗した場合、画像は`blocked`にしてClaude/R2/IGへ進めない。

### 1.4 Redaction Functions

Cloudflare Workers / Edge Functions共通の`redaction.ts`として実装する。

```ts
// supabase/functions/_shared/redaction.ts
const EMAIL_RE = /[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}/gi
// +81表記では市外局番の先頭0が落ちるため、+81と0をalternationで扱う
const JP_PHONE_RE = /(?:\+81[-\s]?|0)\d{1,4}[-\s]?\d{1,4}[-\s]?\d{3,4}/g
const URL_RE = /\bhttps?:\/\/[^\s]+/gi
const MENTION_RE = /(^|\s)@[A-Za-z0-9._]{2,30}\b/g

export async function hmacStoreId(storeId: string, pepper: string, version = "v1"): Promise<string> {
  const key = await crypto.subtle.importKey(
    "raw",
    new TextEncoder().encode(pepper),
    { name: "HMAC", hash: "SHA-256" },
    false,
    ["sign"],
  )
  const sig = await crypto.subtle.sign("HMAC", key, new TextEncoder().encode(storeId))
  const hex = [...new Uint8Array(sig)].map((b) => b.toString(16).padStart(2, "0")).join("")
  return `store_anon_${version}_${hex.slice(0, 24)}`
}

export function redactIp(ip: string | null): string | null {
  if (!ip) return null
  if (/^\d{1,3}(\.\d{1,3}){3}$/.test(ip)) {
    const p = ip.split(".")
    return `${p[0]}.${p[1]}.${p[2]}.0`
  }
  if (ip.includes(":")) {
    // "::"圧縮表記は省略部を0で補い、先頭48bit(3 group)を得る
    const head = ip.split("::")[0].split(":").filter(Boolean)
    while (head.length < 3) head.push("0")
    return `${head.slice(0, 3).join(":")}::/48`
  }
  return "unknown"
}

export function maskPiiText(text: string): string {
  return text
    .replace(EMAIL_RE, "[email]")
    .replace(JP_PHONE_RE, "[phone]")
    .replace(URL_RE, "[url]")
    .replace(MENTION_RE, "$1[mention]")
}

export async function maskCaptionForLogs(caption: string): Promise<{
  masked: string
  sha256: string
  chars: number
  hashtagCount: number
}> {
  const bytes = new TextEncoder().encode(caption)
  const digest = await crypto.subtle.digest("SHA-256", bytes)
  const sha256 = [...new Uint8Array(digest)].map((b) => b.toString(16).padStart(2, "0")).join("")
  const hashtagCount = (caption.match(/#[^\s#]+/g) ?? []).length
  // §1.2の契約どおり、logへ出すのはhash・文字数・hashtag countのみ。
  // regex maskingでは店舗名などのClass 1を捕捉できないため、masking済みでも本文断片はlogへ出さない。
  return {
    masked: `[caption sha256=${sha256.slice(0, 16)} chars=${caption.length} hashtags=${hashtagCount}]`,
    sha256,
    chars: caption.length,
    hashtagCount,
  }
}

export function routeTemplate(pathname: string): string {
  return pathname
    .replace(/[0-9a-f]{8}-[0-9a-f-]{27,}/gi, ":uuid")
    .replace(/\/stores\/[^/]+/g, "/stores/:store_id")
    .replace(/\/posts\/[^/]+/g, "/posts/:post_id")
    .replace(/\/approval\/[^/]+/g, "/approval/:token")
}
```

### 1.5 Image Preprocessing Pipeline

Pipeline entrypoint: `POST /functions/v1/media-preflight` with `x-region: ap-northeast-1`.

Flow:

1. Receive multipart stream. Reject if over 12MB, non JPEG/PNG/WebP, or dimensions > 4096px.
2. Decode metadata only. If EXIF GPS exists, set `gps_present=true`.
3. Re-encode image into JPEG/WebP with metadata stripped. Raw bytes never written to disk except `/tmp` during Edge Function execution and deleted in `finally`.
4. Run OCR/text detection on stripped image.
5. Block if OCR detects phone/email/address-like text, menu density, signboard words, or text area ratio > 12%.
6. Block if face/person detector confidence > 0.70.
7. If clean, upload stripped image to R2 APAC and insert `media_assets`.
8. If blocked, insert `media_assets` with `preflight_status='blocked'`, no R2 key, and show user manual guidance.

Minimum detector contract:

```ts
export interface ImagePreflightResult {
  status: "clean" | "blocked" | "needs_manual"
  strippedBytes: Uint8Array
  mimeType: "image/jpeg" | "image/webp" | "image/png"
  width: number
  height: number
  sha256: string
  gpsPresent: boolean
  exifStripped: boolean
  ocrTextHash?: string
  textAreaRatio: number
  signageConfidence: number
  menuConfidence: number
  faceConfidence: number
  blockReasons: string[]
}
```

Implementation trade-off:

- MVPではOCR false positiveを許容し、疑わしい画像はblockする。
- `needs_manual`はClaude送信不可。店舗管理者が画像を差し替えるか、adminがClass 1扱いで承認し、Claudeを使わず手動captionへ切替える。
- OCR engineがtimeoutした場合にfail-openしてはならない。`preflight_status = 'blocked'`、`block_reasons = ['ocr_timeout']`として扱う。
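
§1.5のthreshold(text area ratio 12%、face confidence 0.70)とfail-closed方針をpureな判定関数に写すと以下のようになる。これは検算・unit test用のsketchで、`needs_manual`への分岐、および型名・定数名はこの例のための仮置きである。

```typescript
// §1.5のthresholdをそのまま写した説明用の定数(仮)
const MAX_TEXT_AREA_RATIO = 0.12
const MAX_FACE_CONFIDENCE = 0.70

export interface PreflightMetrics {
  exifStripped: boolean
  gpsPresent: boolean
  ocrTimedOut: boolean
  textAreaRatio: number
  signageOrMenuDetected: boolean
  faceConfidence: number
}

// detector出力からclean/blockedを決める。理由は複数同時に積む(blockedの説明責任のため)
export function decidePreflight(m: PreflightMetrics): { status: "clean" | "blocked"; blockReasons: string[] } {
  const reasons: string[] = []
  if (m.ocrTimedOut) reasons.push("ocr_timeout") // fail-open禁止: timeoutはblock扱い
  if (!m.exifStripped) reasons.push("exif_not_stripped")
  if (m.gpsPresent) reasons.push("gps_present")
  if (m.textAreaRatio > MAX_TEXT_AREA_RATIO) reasons.push("text_area_ratio")
  if (m.signageOrMenuDetected) reasons.push("signage_or_menu")
  if (m.faceConfidence > MAX_FACE_CONFIDENCE) reasons.push("face_detected")
  return { status: reasons.length === 0 ? "clean" : "blocked", blockReasons: reasons }
}
```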

### 1.6 Anthropic Claude API Contract

Claudeへ送信してよいfieldsは以下の論理構成に限る。実際のMessages APIでは`system`は文字列(またはtext block配列)にserializeして渡す:

```json
{
  "model": "claude-haiku-4-5-20251001",
  "max_tokens": 1024,
  "system": {
    "brand_voice": {
      "tone": "casual|premium|friendly",
      "personality": "redacted controlled vocabulary only",
      "first_person": "当店|私たち",
      "language_style": "ですます調"
    },
    "caption_config": {
      "max_length": 300,
      "opening_styles": ["季節感", "料理描写"],
      "cta_examples": ["ご予約はプロフィールから"]
    },
    "hashtag_config": {
      "always_include": ["#ランチ", "#ディナー"],
      "max_per_post": 12
    },
    "prohibited": ["raw configured prohibited phrases after PII masking"],
    "method_version": "method-v1"
  },
  "messages": [
    {
      "role": "user",
      "content": [
        { "type": "image", "source": { "type": "base64", "media_type": "image/jpeg", "data": "<stripped-clean-image>" } },
        { "type": "text", "text": "PII-free料理画像からInstagram投稿文をJSONで生成してください。店舗名、住所、電話番号、個人名は出力しないでください。" }
      ]
    }
  ],
  "metadata": {
    "store_hash": "store_anon_v1_xxx",
    "post_id": "uuid",
    "image_sha256": "sha256"
  }
}
```

Claudeへ送信禁止:

- 店舗名、住所、電話、メール
- raw store_id
- raw recent caption
- raw OCR text
- EXIF/GPS
- approval email
- IG user id / page id / token
- audit actor id / IP / UA

Retention disclosure:

- Anthropic APIのstandard retentionは最大30日。
- API入力は通常model trainingには使われないが、その旨は契約で明示する。zero data retentionが必要なら、別途Anthropic承認済みのZDR対応API keyを使う。
- 契約書にはAnthropic PBC、米国での処理、送信内容(画像+redacted prompt)、最大30日保持、no-trainingを明記する。
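
送信禁止listの最後の防衛線として、request body全体をscanしてClass 1らしき文字列の残留を検知するguardを送信直前に挟める。以下はsketchで、regexは§1.4と同趣旨の仮のもの。base64画像は誤検知源なので除外し、疑わしければfail-closedで落とす(false positive側に倒す)。

```typescript
// Claude request bodyにClass 1らしき文字列が残っていないか検査する仮のguard。
// 通ったら送信、落ちたら送信せずblocked扱いにする想定。
export function assertClaudePayloadSafe(body: Record<string, unknown>, rawStoreId: string): void {
  const clone: any = JSON.parse(JSON.stringify(body))
  // base64画像bytesはregexの誤検知源なので検査対象から外す
  for (const msg of clone.messages ?? []) {
    for (const part of msg.content ?? []) {
      if (part && part.type === "image") part.source = "[image-excluded]"
    }
  }
  const text = JSON.stringify(clone)
  // metadataにはhash化済みstore_hashのみ許可。raw store_idの混入は即fail
  if (rawStoreId && text.includes(rawStoreId)) throw new Error("claude_payload_unsafe: raw_store_id")
  const checks: Array<[string, RegExp]> = [
    ["email", /[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}/i],
    ["jp_phone", /0\d{1,4}-\d{2,4}-\d{3,4}/],
  ]
  for (const [name, re] of checks) {
    if (re.test(text)) throw new Error(`claude_payload_unsafe: ${name}`)
  }
}
```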

---

## 2. SQL DDL

### 2.1 Migration Ordering

- `0001_extensions_types.sql`
- `0002_core_tables.sql`
- `0003_indexes_constraints.sql`
- `0004_rls.sql`
- `0005_rpc_jobs_external_ops.sql`
- `0006_rpc_audit_erasure_rate_limit.sql`
- `0007_pg_cron_dispatch.sql`
- `0008_vault_seed_and_feature_flags.sql`
- `0009_migration_v2_import_staging.sql`

### 2.2 0001_extensions_types.sql

```sql
create extension if not exists pgcrypto;
create extension if not exists citext;
create extension if not exists pg_cron;
create extension if not exists pg_net;
-- Supabase Vaultのextension名はsupabase_vault(実体はschema vaultに入る)
create extension if not exists supabase_vault cascade;

create schema if not exists app;

create type app.actor_role as enum ('store_manager', 'approver', 'admin', 'service');
create type app.post_status as enum ('draft', 'pending_approval', 'scheduled', 'publishing', 'published', 'failed', 'cancelled', 'deleted');
create type app.approval_status as enum ('pending', 'processing', 'approved', 'rejected', 'expired', 'cancelled');
create type app.job_state as enum ('pending', 'processing', 'done', 'failed', 'cancelled');
create type app.job_kind as enum ('publish_post', 'reconcile_ig', 'refresh_token', 'send_notification', 'erasure_revoke', 'erasure_physical_delete');
create type app.external_op_kind as enum ('ig_create_container', 'ig_publish', 'ig_revoke', 'ig_debug_token', 'r2_put_object', 'anthropic_generate');
create type app.external_op_state as enum ('reserved', 'in_progress', 'succeeded', 'failed', 'unknown', 'manual_remediation_required');
create type app.token_type as enum ('user_access_token', 'long_lived_user_token', 'page_access_token', 'system_user_token');
create type app.revoke_status as enum ('active', 'revoke_pending', 'revoked_verified', 'revoke_failed', 'manual_remediation_required');
create type app.media_preflight_status as enum ('pending', 'clean', 'blocked', 'needs_manual', 'deleted');
create type app.erasure_status as enum ('requested', 'soft_deleted', 'revoke_pending', 'revoke_verified', 'manual_pending', 'pitr_wait', 'physically_deleted', 'certificate_issued', 'failed');
```

### 2.3 0002_core_tables.sql

```sql
create table app.profiles (
  id uuid primary key references auth.users(id) on delete cascade,
  email citext not null unique,
  display_name text,
  global_role app.actor_role not null default 'store_manager',
  mfa_enrolled boolean not null default false,
  disabled_at timestamptz,
  created_at timestamptz not null default now(),
  updated_at timestamptz not null default now()
);

create table app.stores (
  id uuid primary key default gen_random_uuid(),
  slug text not null unique,
  name text not null,
  legal_name text,
  address text,
  phone text,
  public_email citext,
  plan text not null default 'standard' check (plan in ('standard', 'premium')),
  timezone text not null default 'Asia/Tokyo',
  approval_required boolean not null default true,
  brand_voice jsonb not null default '{}'::jsonb,
  hashtag_config jsonb not null default '{}'::jsonb,
  caption_config jsonb not null default '{}'::jsonb,
  prohibited jsonb not null default '[]'::jsonb,
  sns_method text,
  method_version text not null default 'method-v1',
  mock_mode boolean not null default false,
  v4_enabled boolean not null default false,
  v2_store_id text unique,
  deleted_at timestamptz,
  deletion_requested_at timestamptz,
  created_at timestamptz not null default now(),
  updated_at timestamptz not null default now()
);

create table app.store_memberships (
  store_id uuid not null references app.stores(id) on delete cascade,
  user_id uuid not null references app.profiles(id) on delete cascade,
  role app.actor_role not null,
  approver_email citext,
  active boolean not null default true,
  created_at timestamptz not null default now(),
  primary key (store_id, user_id),
  check (role in ('store_manager', 'approver', 'admin'))
);

create table app.instagram_tokens (
  id uuid primary key default gen_random_uuid(),
  store_id uuid not null references app.stores(id) on delete cascade,
  token_type app.token_type not null,
  vault_secret_id uuid not null,
  fb_user_id text,
  page_id text,
  ig_user_id text,
  scopes text[] not null default '{}',
  expires_at timestamptz,
  status app.revoke_status not null default 'active',
  debug_token_last_result jsonb,
  revoked_at timestamptz,
  revoke_verified_at timestamptz,
  manual_remediation_note text,
  created_at timestamptz not null default now(),
  updated_at timestamptz not null default now()
);

-- unique制約の列listには式を書けないため、unique indexで同等の一意性を担保する
create unique index instagram_tokens_identity_uidx
  on app.instagram_tokens (store_id, token_type, coalesce(page_id, ''), coalesce(ig_user_id, ''));

create table app.media_assets (
  id uuid primary key default gen_random_uuid(),
  store_id uuid not null references app.stores(id) on delete cascade,
  uploader_id uuid references app.profiles(id),
  r2_bucket text,
  r2_key text unique,
  original_filename text,
  mime_type text not null,
  byte_size bigint not null check (byte_size > 0 and byte_size <= 12582912),
  width int check (width > 0 and width <= 4096),
  height int check (height > 0 and height <= 4096),
  sha256 text not null,
  exif_stripped boolean not null default false,
  gps_present boolean not null default false,
  ocr_text_hash text,
  text_area_ratio numeric(5,4) not null default 0,
  signage_confidence numeric(5,4) not null default 0,
  menu_confidence numeric(5,4) not null default 0,
  face_confidence numeric(5,4) not null default 0,
  preflight_status app.media_preflight_status not null default 'pending',
  block_reasons text[] not null default '{}',
  deleted_at timestamptz,
  created_at timestamptz not null default now()
);

create table app.posts (
  id uuid primary key default gen_random_uuid(),
  store_id uuid not null references app.stores(id) on delete cascade,
  created_by uuid references app.profiles(id),
  status app.post_status not null default 'draft',
  caption_text text not null default '',
  caption_sha256 text generated always as (encode(digest(caption_text, 'sha256'), 'hex')) stored,
  hashtags text[] not null default '{}',
  dish_names text[] not null default '{}',
  method_version text not null default 'method-v1',
  scheduled_at timestamptz,
  published_at timestamptz,
  ig_media_id text,
  ig_permalink text,
  idempotency_key text not null unique,
  failure_reason text,
  deleted_at timestamptz,
  created_at timestamptz not null default now(),
  updated_at timestamptz not null default now(),
  check ((status <> 'scheduled') or scheduled_at is not null)
);

create table app.post_images (
  post_id uuid not null references app.posts(id) on delete cascade,
  media_asset_id uuid not null references app.media_assets(id),
  ordinal int not null check (ordinal >= 0 and ordinal < 10),
  primary key (post_id, media_asset_id),
  unique (post_id, ordinal)
);

create table app.approval_requests (
  id uuid primary key default gen_random_uuid(),
  post_id uuid not null references app.posts(id) on delete cascade,
  store_id uuid not null references app.stores(id) on delete cascade,
  approver_user_id uuid references app.profiles(id),
  approver_email citext,
  token_hash text not null unique,
  csrf_token_hash text,
  status app.approval_status not null default 'pending',
  comment text,
  responded_at timestamptz,
  expires_at timestamptz not null,
  created_at timestamptz not null default now(),
  check (expires_at > created_at)
);

create table app.job_queue (
  id uuid primary key default gen_random_uuid(),
  store_id uuid references app.stores(id) on delete cascade,
  kind app.job_kind not null,
  payload jsonb not null,
  idempotency_key text not null unique,
  state app.job_state not null default 'pending',
  priority int not null default 0,
  scheduled_at timestamptz not null default now(),
  next_attempt_at timestamptz,
  attempts int not null default 0 check (attempts >= 0),
  max_attempts int not null default 3 check (max_attempts > 0),
  locked_by text,
  locked_until timestamptz,
  dispatch_request_id bigint,
  last_error text,
  finalized_at timestamptz,
  created_at timestamptz not null default now()
);

create table app.external_operations (
  id uuid primary key default gen_random_uuid(),
  store_id uuid references app.stores(id) on delete cascade,
  post_id uuid references app.posts(id) on delete set null,
  kind app.external_op_kind not null,
  idempotency_key text not null,
  state app.external_op_state not null default 'reserved',
  locked_by text,
  locked_until timestamptz,
  request_fingerprint text,
  request_json jsonb,
  response_json jsonb,
  external_id text,
  container_id text,
  media_id text,
  final_status text,
  error text,
  retryable boolean not null default true,
  created_at timestamptz not null default now(),
  updated_at timestamptz not null default now(),
  unique (kind, idempotency_key)
);

create table app.audit_chains (
  chain_key text primary key,
  last_seq bigint not null default 0,
  last_hash bytea,
  updated_at timestamptz not null default now()
);

create table app.audit_logs (
  id uuid primary key default gen_random_uuid(),
  chain_key text not null,
  seq bigint not null,
  ts timestamptz not null default clock_timestamp(),
  store_id uuid,
  actor_role app.actor_role not null,
  actor_id text,
  action text not null,
  target_type text,
  target_id text,
  outcome text not null check (outcome in ('success', 'failure', 'retry', 'skipped', 'blocked')),
  ip inet,
  ip_redacted text,
  user_agent text,
  request_id text,
  cf_ray text,
  error text,
  media_id text,
  metadata jsonb not null default '{}'::jsonb,
  prev_hash bytea,
  current_hash bytea not null,
  unique (chain_key, seq)
);

create table app.rate_limit_buckets (
  bucket_key text primary key,
  count int not null default 0,
  limit_count int not null,
  reset_at timestamptz not null,
  first_seen_at timestamptz not null default now(),
  updated_at timestamptz not null default now()
);

create table app.login_attempts (
  id uuid primary key default gen_random_uuid(),
  email citext,
  store_slug text,
  ip_redacted text,
  success boolean not null,
  failure_reason text,
  created_at timestamptz not null default now()
);

create table app.erasure_requests (
  id uuid primary key default gen_random_uuid(),
  store_id uuid not null references app.stores(id),
  requested_by uuid references app.profiles(id),
  status app.erasure_status not null default 'requested',
  revoke_status app.revoke_status not null default 'revoke_pending',
  requested_at timestamptz not null default now(),
  soft_deleted_at timestamptz,
  revoke_verified_at timestamptz,
  pitr_wait_until timestamptz,
  physically_deleted_at timestamptz,
  certificate_id uuid,
  blocking_reason text,
  evidence jsonb not null default '{}'::jsonb
);

create table app.deletion_certificates (
  id uuid primary key default gen_random_uuid(),
  erasure_request_id uuid not null unique references app.erasure_requests(id),
  store_hash text not null,
  issued_at timestamptz not null default now(),
  body_json jsonb not null,
  body_sha256 text not null
);

create table app.feature_flags (
  key text primary key,
  enabled boolean not null default false,
  json_value jsonb not null default '{}'::jsonb,
  updated_at timestamptz not null default now()
);
```

### 2.4 Indexes

```sql
create index stores_active_slug_idx on app.stores (slug) where deleted_at is null;

create index memberships_user_store_idx on app.store_memberships (user_id, store_id) where active;
create index memberships_store_role_idx on app.store_memberships (store_id, role) where active;

create index posts_store_status_scheduled_idx
  on app.posts (store_id, status, scheduled_at, id)
  where deleted_at is null;

create index posts_publish_lookup_idx
  on app.posts (status, scheduled_at, id)
  where status in ('scheduled', 'publishing') and deleted_at is null;

create index media_assets_store_status_idx
  on app.media_assets (store_id, preflight_status, created_at desc)
  where deleted_at is null;

create index approvals_token_hash_idx on app.approval_requests (token_hash);
create index approvals_store_status_expires_idx
  on app.approval_requests (store_id, status, expires_at);

create index job_queue_ready_idx
  on app.job_queue (state, scheduled_at, priority desc, id)
  where state in ('pending', 'processing');

create index job_queue_locked_until_idx
  on app.job_queue (locked_until, id)
  where state = 'processing';

create index external_ops_kind_key_idx on app.external_operations (kind, idempotency_key);
create index external_ops_post_kind_idx on app.external_operations (post_id, kind, created_at desc);
create index external_ops_stale_idx on app.external_operations (state, locked_until)
  where state in ('reserved', 'in_progress', 'unknown');

create index audit_logs_store_seq_idx on app.audit_logs (store_id, seq);
create index audit_logs_target_idx on app.audit_logs (target_type, target_id, seq);
create index audit_logs_ts_idx on app.audit_logs (ts desc);

create index login_attempts_email_time_idx on app.login_attempts (email, created_at desc);
create index erasure_status_idx on app.erasure_requests (status, pitr_wait_until);
```

### 2.5 RLS

```sql
alter table app.profiles enable row level security;
alter table app.stores enable row level security;
alter table app.store_memberships enable row level security;
alter table app.instagram_tokens enable row level security;
alter table app.media_assets enable row level security;
alter table app.posts enable row level security;
alter table app.post_images enable row level security;
alter table app.approval_requests enable row level security;
alter table app.audit_logs enable row level security;

-- current_roleはSQL予約語のため関数名に使えない。role claimはapp_metadata配下のcustom claimを想定(仮)
create or replace function app.current_actor_role()
returns app.actor_role language sql stable as $$
  select case
    -- 未知の値をenumへcastすると例外になるため、cast前に許可listで検査する
    when (auth.jwt() -> 'app_metadata' ->> 'actor_role')
      in ('store_manager', 'approver', 'admin', 'service')
      then (auth.jwt() -> 'app_metadata' ->> 'actor_role')::app.actor_role
    else 'store_manager'::app.actor_role
  end
$$;

create or replace function app.is_admin()
returns boolean language sql stable as $$
  select exists (
    select 1 from app.profiles p
    where p.id = auth.uid()
      and p.global_role = 'admin'
      and p.disabled_at is null
  )
$$;

create or replace function app.is_store_member(p_store_id uuid, p_roles app.actor_role[])
returns boolean language sql stable security definer set search_path = app, public as $$
  select app.is_admin() or exists (
    select 1 from app.store_memberships m
    join app.profiles p on p.id = m.user_id
    where m.store_id = p_store_id
      and m.user_id = auth.uid()
      and m.active
      and p.disabled_at is null
      and m.role = any(p_roles)
  )
$$;

create policy profiles_self_select on app.profiles
  for select using (id = auth.uid() or app.is_admin());

create policy stores_member_select on app.stores
  for select using (deleted_at is null and app.is_store_member(id, array['store_manager','approver','admin']::app.actor_role[]));

create policy stores_manager_update on app.stores
  for update using (app.is_store_member(id, array['store_manager','admin']::app.actor_role[]))
  with check (app.is_store_member(id, array['store_manager','admin']::app.actor_role[]));

create policy memberships_member_select on app.store_memberships
  for select using (app.is_store_member(store_id, array['store_manager','admin']::app.actor_role[]));

create policy posts_member_select on app.posts
  for select using (deleted_at is null and app.is_store_member(store_id, array['store_manager','approver','admin']::app.actor_role[]));

create policy posts_manager_write on app.posts
  for all using (app.is_store_member(store_id, array['store_manager','admin']::app.actor_role[]))
  with check (app.is_store_member(store_id, array['store_manager','admin']::app.actor_role[]));

create policy media_member_select on app.media_assets
  for select using (deleted_at is null and app.is_store_member(store_id, array['store_manager','approver','admin']::app.actor_role[]));

create policy media_manager_write on app.media_assets
  for all using (app.is_store_member(store_id, array['store_manager','admin']::app.actor_role[]))
  with check (app.is_store_member(store_id, array['store_manager','admin']::app.actor_role[]));

create policy approvals_member_select on app.approval_requests
  for select using (app.is_store_member(store_id, array['store_manager','approver','admin']::app.actor_role[]));

create policy audit_admin_or_manager_select on app.audit_logs
  for select using (store_id is null or app.is_store_member(store_id, array['store_manager','admin']::app.actor_role[]));

create policy no_client_tokens on app.instagram_tokens
  for all using (false) with check (false);
```

Token, job, external operation, erasureはclient direct access禁止。Edge Functionsはservice roleでRPC経由のみ触る。

### 2.6 Supabase Vault Usage

Vault secret naming:

- `ig_token:{store_id}:{token_type}:{token_id}`
- `cron_project_url`
- `cron_service_token`
- `meta_app_secret`
- `anthropic_api_key`
- `redaction_hmac_pepper_v1`
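
naming規約はhelperに落としておくと、secret名のtypoをunit testで防げる。以下はig_token用の仮のsketchで、関数名は本書の規定ではない。

```typescript
// §2.6の naming規約 ig_token:{store_id}:{token_type}:{token_id} を組み立てるhelper(仮名)。
// 区切り文字":"が部品に混入するとparse不能になるため先に弾く。
export function igTokenSecretName(storeId: string, tokenType: string, tokenId: string): string {
  for (const part of [storeId, tokenType, tokenId]) {
    if (part.length === 0 || part.includes(":")) throw new Error("invalid_secret_name_part")
  }
  return `ig_token:${storeId}:${tokenType}:${tokenId}`
}
```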

Token insert example:

```sql
select vault.create_secret(
  'EAAB...',
  'ig_token:STORE_UUID:user_access_token:TOKEN_UUID',
  'Instagram user token for store'
);
```

`instagram_tokens.vault_secret_id`には`vault.secrets.id`を保存する。Edge Functionはservice roleで以下を読む。

```sql
select decrypted_secret
from vault.decrypted_secrets
where id = $1;
```

Vault alpha/SLA riskは運用risk registerに残す。PITR復元で削除済みsecretが復活し得るため、erasureはVault row deletionではなくMeta revoke + tombstone + PITR tailで実装する。

---

## 3. PL/pgSQL Functions

### 3.1 claim_jobs(worker_id, lease_seconds, max_jobs)

```sql
create or replace function app.claim_jobs(
  p_worker_id text,
  p_lease_seconds int default 300,
  p_max_jobs int default 5
)
returns setof app.job_queue
language plpgsql
security definer
set search_path = app, public
as $$
begin
  return query
  with picked as (
    select id
    from app.job_queue
    where (
        state = 'pending'
        or (state = 'processing' and locked_until < now())
      )
      and scheduled_at <= now()
      and coalesce(next_attempt_at, scheduled_at) <= now()
      and attempts < max_attempts
    order by priority desc, scheduled_at asc, id asc
    limit greatest(1, least(p_max_jobs, 50))
    for update skip locked
  )
  update app.job_queue j
  set state = 'processing',
      locked_by = p_worker_id,
      locked_until = now() + make_interval(secs => greatest(30, p_lease_seconds)),
      attempts = attempts + 1,
      last_error = null
  from picked
  where j.id = picked.id
  returning j.*;
end;
$$;
```
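
claim_jobsのwhere句が表す「claim可能」条件をTypeScriptに写すと以下になる。正はあくまでSQL側で、これはunit test・レビューで条件を検算するための等価表現のsketch(型名・関数名は仮)。

```typescript
type JobState = "pending" | "processing" | "done" | "failed" | "cancelled"

interface ClaimableJob {
  state: JobState
  lockedUntil: Date | null
  scheduledAt: Date
  nextAttemptAt: Date | null
  attempts: number
  maxAttempts: number
}

// claim_jobsのwhere句と同じ判定: pending、またはlease失効済みprocessingで、
// 実行予定時刻を過ぎ、かつattempts上限に達していないjobのみclaimできる
export function isClaimable(j: ClaimableJob, now: Date): boolean {
  const stateOk =
    j.state === "pending" ||
    (j.state === "processing" && j.lockedUntil !== null && j.lockedUntil < now)
  return (
    stateOk &&
    j.scheduledAt <= now &&
    (j.nextAttemptAt ?? j.scheduledAt) <= now &&
    j.attempts < j.maxAttempts
  )
}
```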

### 3.2 extend_job_lease(job_id, worker_id, lease_seconds)

```sql
create or replace function app.extend_job_lease(
  p_job_id uuid,
  p_worker_id text,
  p_lease_seconds int default 300
)
returns boolean
language plpgsql
security definer
set search_path = app, public
as $$
declare v_changed int;
begin
  update app.job_queue
  set locked_until = now() + make_interval(secs => greatest(30, p_lease_seconds))
  where id = p_job_id
    and locked_by = p_worker_id
    and state = 'processing'
    and locked_until > now();

  get diagnostics v_changed = row_count;
  return v_changed = 1;
end;
$$;
```

### 3.3 finalize_job(job_id, worker_id, success, error)

```sql
create or replace function app.finalize_job(
  p_job_id uuid,
  p_worker_id text,
  p_success boolean,
  p_error text default null,
  p_retry_after_seconds int default null
)
returns boolean
language plpgsql
security definer
set search_path = app, public
as $$
declare
  v_changed int;
  v_retry_seconds int;
begin
  if p_success then
    update app.job_queue
    set state = 'done',
        finalized_at = now(),
        locked_by = null,
        locked_until = null,
        last_error = null
    where id = p_job_id
      and locked_by = p_worker_id
      and state = 'processing'
      and locked_until > now();
  else
    select coalesce(p_retry_after_seconds, least(3600, 60 * power(2, attempts)))::int
      into v_retry_seconds
    from app.job_queue
    where id = p_job_id;

    update app.job_queue
    set state = case when attempts >= max_attempts then 'failed'::app.job_state else 'pending'::app.job_state end,
        finalized_at = case when attempts >= max_attempts then now() else null end,
        next_attempt_at = case when attempts >= max_attempts then null else now() + make_interval(secs => v_retry_seconds) end,
        locked_by = null,
        locked_until = null,
        last_error = left(coalesce(p_error, 'unknown'), 4000)
    where id = p_job_id
      and locked_by = p_worker_id
      and state = 'processing'
      and locked_until > now();
  end if;

  get diagnostics v_changed = row_count;
  return v_changed = 1;
end;
$$;
```
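
finalize_jobのbackoff式 `least(3600, 60 * 2^attempts)` は、client側の見積り・テストのために次のように写せる(検算用のsketch)。

```typescript
// finalize_jobの指数backoffと同じ式。attempts=0で60s、上限3600s
export function retryDelaySeconds(attempts: number): number {
  return Math.min(3600, 60 * 2 ** attempts)
}
```

attempts 0, 1, 2, ... に対して 60, 120, 240, ... と伸び、6回目以降は3600sで頭打ちになる。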

### 3.4 reserve_external_op(idempotency_key, kind)

```sql
create type app.reserve_external_op_result as (
  op_id uuid,
  state app.external_op_state,
  should_execute boolean,
  external_id text,
  container_id text,
  media_id text,
  response_json jsonb
);

create or replace function app.reserve_external_op(
  p_idempotency_key text,
  p_kind app.external_op_kind,
  p_store_id uuid default null,
  p_post_id uuid default null,
  p_worker_id text default null,
  p_request_fingerprint text default null,
  p_request_json jsonb default '{}'::jsonb,
  p_lease_seconds int default 300
)
returns app.reserve_external_op_result
language plpgsql
security definer
set search_path = app, public
as $$
declare
  v app.external_operations;
  r app.reserve_external_op_result;
begin
  insert into app.external_operations (
    kind, idempotency_key, store_id, post_id, state, locked_by, locked_until,
    request_fingerprint, request_json
  )
  values (
    p_kind, p_idempotency_key, p_store_id, p_post_id, 'reserved',
    p_worker_id, now() + make_interval(secs => p_lease_seconds),
    p_request_fingerprint, p_request_json
  )
  on conflict (kind, idempotency_key) do nothing;

  select * into v
  from app.external_operations
  where kind = p_kind and idempotency_key = p_idempotency_key
  for update;

  if v.state = 'succeeded' then
    r := (v.id, v.state, false, v.external_id, v.container_id, v.media_id, v.response_json);
    return r;
  end if;

  if v.state in ('reserved', 'in_progress', 'unknown') and (v.locked_until is null or v.locked_until < now() or v.locked_by = p_worker_id) then
    update app.external_operations
    set state = 'in_progress',
        locked_by = p_worker_id,
        locked_until = now() + make_interval(secs => p_lease_seconds),
        updated_at = now()
    where id = v.id
    returning * into v;

    r := (v.id, v.state, true, v.external_id, v.container_id, v.media_id, v.response_json);
    return r;
  end if;

  r := (v.id, v.state, false, v.external_id, v.container_id, v.media_id, v.response_json);
  return r;
end;
$$;
```
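
reserve_external_op内の「実行してよいか」の判定(succeededなら結果再利用、lease未設定・失効・自leaseならtakeover)をTypeScriptに写した等価表現を置いておく。正はSQL側で、これはunit test・レビュー用のsketch(関数名は仮)。

```typescript
type OpState = "reserved" | "in_progress" | "succeeded" | "failed" | "unknown" | "manual_remediation_required"

// reserve_external_opのshould_execute判定の等価表現(検算用)
export function shouldExecuteOp(
  state: OpState,
  lockedUntil: Date | null,
  lockedBy: string | null,
  workerId: string,
  now: Date,
): boolean {
  if (state === "succeeded") return false // 保存済みresponseを再利用し、外部副作用を繰り返さない
  if (state !== "reserved" && state !== "in_progress" && state !== "unknown") return false
  // lease未設定・lease失効・自分のleaseのいずれかならtakeoverして実行できる
  return lockedUntil === null || lockedUntil.getTime() < now.getTime() || lockedBy === workerId
}
```

failed/manual_remediation_requiredがfalseになるのは、再試行可否の判断をretryable flagと運用側に委ねるためである。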

Finalize external op:

```sql
create or replace function app.finalize_external_op(
  p_op_id uuid,
  p_worker_id text,
  p_state app.external_op_state,
  p_external_id text default null,
  p_container_id text default null,
  p_media_id text default null,
  p_response_json jsonb default '{}'::jsonb,
  p_error text default null,
  p_retryable boolean default true
)
returns boolean
language plpgsql
security definer
set search_path = app, public
as $$
declare v_changed int;
begin
  update app.external_operations
  set state = p_state,
      external_id = coalesce(p_external_id, external_id),
      container_id = coalesce(p_container_id, container_id),
      media_id = coalesce(p_media_id, media_id),
      response_json = coalesce(p_response_json, response_json),
      error = p_error,
      retryable = p_retryable,
      locked_by = null,
      locked_until = null,
      updated_at = now()
  where id = p_op_id
    and locked_by = p_worker_id;

  get diagnostics v_changed = row_count;
  return v_changed = 1;
end;
$$;
```

### 3.5 audit_log_append

```sql
create or replace function app.audit_log_append(
  p_store_id uuid,
  p_actor_role app.actor_role,
  p_actor_id text,
  p_action text,
  p_target_type text,
  p_target_id text,
  p_outcome text,
  p_ip inet default null,
  p_ip_redacted text default null,
  p_user_agent text default null,
  p_request_id text default null,
  p_cf_ray text default null,
  p_error text default null,
  p_media_id text default null,
  p_metadata jsonb default '{}'::jsonb
)
returns uuid
language plpgsql
security definer
set search_path = app, public
as $$
declare
  v_chain_key text := coalesce(p_store_id::text, 'global');
  v_seq bigint;
  v_prev bytea;
  v_current bytea;
  v_id uuid := gen_random_uuid();
  v_canonical jsonb;
begin
  insert into app.audit_chains(chain_key) values (v_chain_key)
  on conflict (chain_key) do nothing;

  select last_seq, last_hash
    into v_seq, v_prev
  from app.audit_chains
  where chain_key = v_chain_key
  for update;

  v_seq := v_seq + 1;

  v_canonical := jsonb_build_object(
    'id', v_id,
    'chain_key', v_chain_key,
    'seq', v_seq,
    'store_id', p_store_id,
    'actor_role', p_actor_role,
    'actor_id', p_actor_id,
    'action', p_action,
    'target_type', p_target_type,
    'target_id', p_target_id,
    'outcome', p_outcome,
    'ip_redacted', p_ip_redacted,
    'request_id', p_request_id,
    'cf_ray', p_cf_ray,
    'error', p_error,
    'media_id', p_media_id,
    'metadata', coalesce(p_metadata, '{}'::jsonb)
  );

  v_current := digest(coalesce(v_prev, ''::bytea) || convert_to(v_canonical::text, 'utf8'), 'sha256');

  insert into app.audit_logs (
    id, chain_key, seq, store_id, actor_role, actor_id, action, target_type, target_id,
    outcome, ip, ip_redacted, user_agent, request_id, cf_ray, error, media_id,
    metadata, prev_hash, current_hash
  )
  values (
    v_id, v_chain_key, v_seq, p_store_id, p_actor_role, p_actor_id, p_action, p_target_type, p_target_id,
    p_outcome, p_ip, p_ip_redacted, p_user_agent, p_request_id, p_cf_ray, p_error, p_media_id,
    coalesce(p_metadata, '{}'::jsonb), v_prev, v_current
  );

  update app.audit_chains
  set last_seq = v_seq, last_hash = v_current, updated_at = now()
  where chain_key = v_chain_key;

  return v_id;
end;
$$;
```

### 3.6 erasure_soft_delete(store_id)

```sql
create or replace function app.erasure_soft_delete(
  p_store_id uuid,
  p_requested_by uuid default null
)
returns uuid
language plpgsql
security definer
set search_path = app, public
as $$
declare
  v_request_id uuid;
begin
  update app.stores
  set deleted_at = now(),
      deletion_requested_at = now(),
      updated_at = now()
  where id = p_store_id
    and deleted_at is null;

  insert into app.erasure_requests (
    store_id, requested_by, status, revoke_status, soft_deleted_at, pitr_wait_until
  )
  values (
    p_store_id, p_requested_by, 'soft_deleted', 'revoke_pending', now(), now() + interval '8 days'
  )
  returning id into v_request_id;

  update app.instagram_tokens
  set status = 'revoke_pending', updated_at = now()
  where store_id = p_store_id
    and status = 'active';

  perform app.audit_log_append(
    p_store_id, 'admin', coalesce(p_requested_by::text, 'system'),
    'erasure_soft_delete', 'store', p_store_id::text, 'success',
    null, null, null, null, null, null, null,
    jsonb_build_object('erasure_request_id', v_request_id)
  );

  return v_request_id;
end;
$$;
```

### 3.7 erasure_physical_delete_batch

```sql
create or replace function app.erasure_physical_delete_batch(p_limit int default 20)
returns table(erasure_request_id uuid, store_id uuid, deleted boolean)
language plpgsql
security definer
set search_path = app, public
as $$
declare
  r record;
  v_store_hash text;
begin
  for r in
    select er.id, er.store_id
    from app.erasure_requests er
    where er.status in ('pitr_wait', 'revoke_verified')
      and er.pitr_wait_until <= now()
    order by er.pitr_wait_until asc
    limit p_limit
    for update skip locked
  loop
    v_store_hash := encode(digest(r.store_id::text, 'sha256'), 'hex');

    -- Table aliases qualify every store_id reference: the unqualified column
    -- would be ambiguous against the store_id OUT parameter and fail at runtime.
    delete from app.post_images pi
    where pi.post_id in (select p.id from app.posts p where p.store_id = r.store_id);
    delete from app.approval_requests ar where ar.store_id = r.store_id;
    delete from app.media_assets ma where ma.store_id = r.store_id;
    delete from app.external_operations eo where eo.store_id = r.store_id;
    delete from app.job_queue jq where jq.store_id = r.store_id;
    delete from app.instagram_tokens it where it.store_id = r.store_id;
    delete from app.posts p where p.store_id = r.store_id;
    delete from app.store_memberships sm where sm.store_id = r.store_id;

    update app.audit_logs al
    set store_id = null,
        actor_id = null,
        ip = null,
        user_agent = null,
        metadata = jsonb_build_object('erased_store_hash', v_store_hash)
    where al.store_id = r.store_id;

    delete from app.stores s where s.id = r.store_id;

    update app.erasure_requests er
    set status = 'physically_deleted',
        physically_deleted_at = now(),
        evidence = evidence || jsonb_build_object('store_hash', v_store_hash)
    where er.id = r.id;

    erasure_request_id := r.id;
    store_id := r.store_id;
    deleted := true;
    return next;
  end loop;
end;
$$;
```

---

## 4. Edge Functions / CF Workers Code

### 4.1 Publish Flow

Rule: the job lease only answers "who is processing this job right now". Duplicate IG publish prevention is owned by `external_operations`.

Publish steps:

1. `job-dispatcher` receives `{ job_id, worker_id }`.
2. Validate job row: `state='processing'`, `locked_by=worker_id`, `locked_until > now()`.
3. Load post, clean media assets, active token from Vault.
4. `reserve_external_op(post.id + ':create_container:v1', 'ig_create_container')`.
5. If already succeeded, reuse `container_id`.
6. Otherwise call `POST /{ig-user-id}/media`, store `container_id`.
7. Poll container, heartbeat job every 30s.
8. `reserve_external_op(post.id + ':media_publish:v1', 'ig_publish')`.
9. If already succeeded, reuse `media_id`.
10. Otherwise call `POST /{ig-user-id}/media_publish`.
11. Immediately finalize external op with `media_id`.
12. Update post `published`.
13. `finalize_job(success=true)`.
14. If crash occurs, retry consults ledger before IG API.
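The branching in steps 4-11 can be sketched as a pure decision function. This is a sketch only: the field names (`state`, `should_execute`) are assumptions mirroring the `reserve_external_op` RPC result defined earlier in this document.

```typescript
// Sketch: decide what a worker does with a reserved external operation.
// Field names are assumptions matching the reserve_external_op result type.
type ReserveResult = {
  state: string
  should_execute: boolean
  container_id?: string | null
  media_id?: string | null
}

// "reuse": a prior attempt already succeeded, reuse its external id.
// "execute": this worker holds the lease and must perform the side effect.
// "skip": another worker holds the lease, or the state is terminal.
export function decideExternalOp(r: ReserveResult): "reuse" | "execute" | "skip" {
  if (r.state === "succeeded") return "reuse"
  if (r.should_execute) return "execute"
  return "skip"
}
```

The dispatcher code below follows this shape: a `succeeded` reservation short-circuits to the stored `container_id`/`media_id` without touching the Graph API.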

### 4.2 Graph API Wrapper

```ts
// supabase/functions/_shared/meta.ts
export type GraphMethod = "GET" | "POST" | "DELETE"
const GRAPH = "https://graph.facebook.com/v21.0"

export class GraphApiError extends Error {
  constructor(public status: number, public body: unknown) {
    super(`Graph API ${status}: ${JSON.stringify(body)}`)
  }
}

export async function graphApi<T>(
  path: string,
  params: Record<string, string>,
  method: GraphMethod = "POST",
): Promise<T> {
  const url = new URL(`${GRAPH}${path}`)
  const init: RequestInit = { method }

  if (method === "GET" || method === "DELETE") {
    for (const [k, v] of Object.entries(params)) url.searchParams.set(k, v)
  } else {
    init.headers = { "Content-Type": "application/x-www-form-urlencoded" }
    init.body = new URLSearchParams(params)
  }

  const res = await fetch(url, init)
  const data = await res.json().catch(() => ({}))
  if (!res.ok || data.error) throw new GraphApiError(res.status, data)
  return data as T
}

export async function debugToken(inputToken: string, appId: string, appSecret: string) {
  return await graphApi<{
    data: {
      app_id: string
      type?: string
      application?: string
      expires_at?: number
      is_valid: boolean
      issued_at?: number
      scopes?: string[]
      user_id?: string
    }
  }>("/debug_token", {
    input_token: inputToken,
    access_token: `${appId}|${appSecret}`,
  }, "GET")
}
```

### 4.3 IG Publish Code

```ts
// supabase/functions/_shared/instagram.ts
import { graphApi } from "./meta.ts"

async function waitForContainer(
  containerId: string,
  accessToken: string,
  heartbeat: () => Promise<void>,
  maxWaitSec = 90,
): Promise<void> {
  const started = Date.now()
  while ((Date.now() - started) / 1000 < maxWaitSec) {
    await heartbeat()
    const data = await graphApi<{ status_code: string; status?: string }>(
      `/${containerId}`,
      { fields: "status_code,status", access_token: accessToken },
      "GET",
    )
    if (data.status_code === "FINISHED") return
    if (data.status_code === "ERROR") throw new Error(`IG container error: ${data.status ?? "unknown"}`)
    await new Promise((r) => setTimeout(r, 2000))
  }
  throw new Error(`IG container timeout: ${containerId}`)
}

export async function createSingleImageContainer(args: {
  igUserId: string
  accessToken: string
  imageUrl: string
  caption: string
  idempotencyKey: string
}) {
  return await graphApi<{ id: string }>(
    `/${args.igUserId}/media`,
    {
      image_url: args.imageUrl,
      caption: args.caption,
      access_token: args.accessToken,
      idempotency_key: args.idempotencyKey,
    },
    "POST",
  )
}

export async function publishContainer(args: {
  igUserId: string
  accessToken: string
  containerId: string
  idempotencyKey: string
}) {
  return await graphApi<{ id: string }>(
    `/${args.igUserId}/media_publish`,
    {
      creation_id: args.containerId,
      access_token: args.accessToken,
      idempotency_key: args.idempotencyKey,
    },
    "POST",
  )
}

export async function publishSingleImage(args: {
  igUserId: string
  accessToken: string
  imageUrl: string
  caption: string
  createKey: string
  publishKey: string
  heartbeat: () => Promise<void>
}) {
  const container = await createSingleImageContainer({
    igUserId: args.igUserId,
    accessToken: args.accessToken,
    imageUrl: args.imageUrl,
    caption: args.caption,
    idempotencyKey: args.createKey,
  })
  await waitForContainer(container.id, args.accessToken, args.heartbeat)
  const published = await publishContainer({
    igUserId: args.igUserId,
    accessToken: args.accessToken,
    containerId: container.id,
    idempotencyKey: args.publishKey,
  })
  return { containerId: container.id, mediaId: published.id }
}
```

Trade-off: do not assume the Instagram Content Publishing API officially suppresses side effects based on `idempotency_key`. Send `idempotency_key` as a trace, but the source of truth for duplicate-publish prevention is the `external_operations` ledger. If Meta rejects unknown params during E2, disable `idempotency_key` sending in the Graph wrapper via feature flag and rely on the ledger alone.
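The feature-flag gate described above can be sketched as a parameter filter applied just before the Graph call. The flag itself is an assumption of this sketch; `idempotency_key` is not a documented Graph API parameter.

```typescript
// Sketch: drop idempotency_key from outgoing Graph params when the flag is off.
// The flag wiring (env var, config table) is left to the implementation.
export function gateIdempotencyKey(
  params: Record<string, string>,
  flagEnabled: boolean,
): Record<string, string> {
  if (flagEnabled) return params
  // Remove only the trace param; everything else passes through untouched.
  const { idempotency_key: _dropped, ...rest } = params
  return rest
}
```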

### 4.4 Job Dispatcher Edge Function

```ts
// supabase/functions/job-dispatcher/index.ts
import { createClient } from "npm:@supabase/supabase-js@2"
import { publishSingleImage } from "../_shared/instagram.ts"

const SUPABASE_URL = Deno.env.get("SUPABASE_URL")!
const SERVICE_KEY = Deno.env.get("SUPABASE_SERVICE_ROLE_KEY")!

Deno.serve(async (req) => {
  if (Deno.env.get("SB_REGION") !== "ap-northeast-1") {
    return Response.json({ error: "wrong_region", region: Deno.env.get("SB_REGION") }, { status: 409 })
  }

  const body = await req.json()
  const { job_id, worker_id } = body as { job_id: string; worker_id: string }
  const supabase = createClient(SUPABASE_URL, SERVICE_KEY, { auth: { persistSession: false } })

  EdgeRuntime.waitUntil(processJob(supabase, job_id, worker_id))
  return Response.json({ accepted: true, job_id }, { status: 202 })
})

async function processJob(supabase: ReturnType<typeof createClient>, jobId: string, workerId: string) {
  try {
    const { data: job, error } = await supabase
      .from("job_queue")
      .select("*")
      .eq("id", jobId)
      .eq("locked_by", workerId)
      .eq("state", "processing")
      .single()
    if (error || !job) throw new Error(`job_not_found:${error?.message}`)

    if (job.kind === "publish_post") {
      await processPublishJob(supabase, job, workerId)
      await supabase.rpc("finalize_job", { p_job_id: jobId, p_worker_id: workerId, p_success: true })
      return
    }

    throw new Error(`unsupported_job_kind:${job.kind}`)
  } catch (e) {
    await supabase.rpc("finalize_job", {
      p_job_id: jobId,
      p_worker_id: workerId,
      p_success: false,
      p_error: e instanceof Error ? e.message : String(e),
    })
  }
}

async function processPublishJob(supabase: ReturnType<typeof createClient>, job: any, workerId: string) {
  const postId = job.payload.post_id
  const { data: post } = await supabase
    .from("posts")
    .select("*, media_assets:post_images(media_assets(*))")
    .eq("id", postId)
    .single()
  if (!post) throw new Error("post_missing")

  const cleanImages = post.media_assets
    .map((x: any) => x.media_assets)
    .filter((m: any) => m.preflight_status === "clean" && m.r2_key)
  if (cleanImages.length === 0) throw new Error("no_clean_images")

  const { data: tokenRow } = await supabase
    .from("instagram_tokens")
    .select("*")
    .eq("store_id", post.store_id)
    .in("token_type", ["user_access_token", "long_lived_user_token", "page_access_token"])
    .eq("status", "active")
    .limit(1)
    .single()
  if (!tokenRow) throw new Error("ig_token_missing")

  const { data: secretRows } = await supabase
    .schema("vault")
    .from("decrypted_secrets")
    .select("decrypted_secret")
    .eq("id", tokenRow.vault_secret_id)
  const accessToken = secretRows?.[0]?.decrypted_secret
  if (!accessToken) throw new Error("ig_token_secret_missing")

  const heartbeat = async () => {
    const { data } = await supabase.rpc("extend_job_lease", {
      p_job_id: job.id,
      p_worker_id: workerId,
      p_lease_seconds: 300,
    })
    if (!data) throw new Error("job_lease_lost")
  }

  const createKey = `${post.id}:ig_create_container:v1`
  const { data: createReserve } = await supabase.rpc("reserve_external_op", {
    p_idempotency_key: createKey,
    p_kind: "ig_create_container",
    p_store_id: post.store_id,
    p_post_id: post.id,
    p_worker_id: workerId,
    p_request_json: { image_count: cleanImages.length, caption_sha256: post.caption_sha256 },
  })

  let containerId = createReserve.container_id
  let mediaId: string | undefined

  if (createReserve.should_execute) {
    const result = await publishSingleImage({
      igUserId: tokenRow.ig_user_id,
      accessToken,
      imageUrl: cleanImages[0].public_url,
      caption: post.caption_text,
      createKey,
      publishKey: `${post.id}:ig_publish:v1`,
      heartbeat,
    })
    containerId = result.containerId
    mediaId = result.mediaId

    await supabase.rpc("finalize_external_op", {
      p_op_id: createReserve.op_id,
      p_worker_id: workerId,
      p_state: "succeeded",
      p_container_id: containerId,
      p_response_json: { container_id: containerId },
    })
  }

  const publishKey = `${post.id}:ig_publish:v1`
  const { data: publishReserve } = await supabase.rpc("reserve_external_op", {
    p_idempotency_key: publishKey,
    p_kind: "ig_publish",
    p_store_id: post.store_id,
    p_post_id: post.id,
    p_worker_id: workerId,
    p_request_json: { container_id: containerId, caption_sha256: post.caption_sha256 },
  })

  if (!mediaId && publishReserve.media_id) mediaId = publishReserve.media_id

  if (publishReserve.should_execute) {
    if (!mediaId) {
      const { publishContainer } = await import("../_shared/instagram.ts")
      const published = await publishContainer({
        igUserId: tokenRow.ig_user_id,
        accessToken,
        containerId,
        idempotencyKey: publishKey,
      })
      mediaId = published.id
    }
    // Finalize even when mediaId came from publishSingleImage above; otherwise
    // the ig_publish ledger row stays in_progress and trips reconciliation.
    await supabase.rpc("finalize_external_op", {
      p_op_id: publishReserve.op_id,
      p_worker_id: workerId,
      p_state: "succeeded",
      p_media_id: mediaId,
      p_response_json: { media_id: mediaId },
    })
  }

  if (!mediaId) throw new Error("publish_media_id_missing")

  await supabase
    .from("posts")
    .update({ status: "published", ig_media_id: mediaId, published_at: new Date().toISOString() })
    .eq("id", post.id)
}
```

### 4.5 Reconciliation for Orphan IG Media

Crash window: `media_publish` returned but ledger/post update failed.

Reconciliation job:

1. Select `external_operations` where `kind='ig_publish'` and `state in ('in_progress','unknown')` and `locked_until < now()`.
2. Query latest IG media: `GET /{ig-user-id}/media?fields=id,caption,permalink,timestamp&limit=50`.
3. Match by exact caption hash when possible:
   - store `caption_sha256`
   - compare exact caption if available from IG
   - timestamp within `op.created_at ± 10min`
4. If one match: mark op succeeded, update post.
5. If multiple matches: set `manual_remediation_required`.
6. If no match: retry publish only if no `media_publish` HTTP response was ever observed. If response was observed but lost, do not retry; mark `unknown`.

```ts
// supabase/functions/_shared/reconcile.ts
import { graphApi } from "./meta.ts"

// Hash an IG caption the same way caption_sha256 was computed at reserve time.
async function sha256Hex(text: string): Promise<string> {
  const buf = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(text))
  return Array.from(new Uint8Array(buf)).map((b) => b.toString(16).padStart(2, "0")).join("")
}

export async function reconcileIgPublish(supabase: any, op: any, accessToken: string, igUserId: string) {
  const media = await graphApi<{ data: Array<{ id: string; caption?: string; permalink?: string; timestamp?: string }> }>(
    `/${igUserId}/media`,
    { fields: "id,caption,permalink,timestamp", limit: "50", access_token: accessToken },
    "GET",
  )

  // request_json stores caption_sha256, not the raw caption, so compare hashes.
  const expectedHash: string | undefined = op.request_json?.caption_sha256
  const createdAt = new Date(op.created_at).getTime()
  const matches: typeof media.data = []
  for (const m of media.data) {
    const ts = m.timestamp ? new Date(m.timestamp).getTime() : 0
    if (Math.abs(ts - createdAt) > 10 * 60 * 1000) continue
    if (expectedHash && (!m.caption || (await sha256Hex(m.caption)) !== expectedHash)) continue
    matches.push(m)
  }

  if (matches.length === 1) {
    await supabase.rpc("finalize_external_op", {
      p_op_id: op.id,
      p_worker_id: "reconciler",
      p_state: "succeeded",
      p_media_id: matches[0].id,
      p_response_json: matches[0],
    })
    return { status: "reconciled", mediaId: matches[0].id }
  }

  if (matches.length > 1) {
    await supabase.rpc("finalize_external_op", {
      p_op_id: op.id,
      p_worker_id: "reconciler",
      p_state: "manual_remediation_required",
      p_error: "multiple_possible_ig_media_matches",
      p_retryable: false,
    })
    return { status: "manual" }
  }

  return { status: "not_found" }
}
```

### 4.6 Meta Revoke Identity Model

Token type rules:

- User access token: `DELETE /v21.0/me/permissions`
- Long-lived user token: same
- Page access token: `DELETE /v21.0/{page-id}/subscribed_apps` for webhook/app subscription cleanup, then `debug_token` verify. This does not prove user authorization revoked.
- System user token: no guaranteed API revoke path. `manual_remediation_required`.

```ts
export async function revokeTokenFamily(args: {
  supabase: any
  tokenRows: any[]
  appId: string
  appSecret: string
}) {
  const results = []
  for (const tokenRow of args.tokenRows) {
    const accessToken = await loadVaultSecret(args.supabase, tokenRow.vault_secret_id)

    if (tokenRow.token_type === "user_access_token" || tokenRow.token_type === "long_lived_user_token") {
      const revoke = await graphApi<{ success?: boolean }>(
        "/me/permissions",
        { access_token: accessToken },
        "DELETE",
      )
      const debug = await debugToken(accessToken, args.appId, args.appSecret)
      const verified = debug.data.is_valid === false
      results.push({ token_id: tokenRow.id, type: tokenRow.token_type, revoke, debug, verified })
      continue
    }

    if (tokenRow.token_type === "page_access_token") {
      let cleanup = null
      if (tokenRow.page_id) {
        cleanup = await graphApi<{ success?: boolean }>(
          `/${tokenRow.page_id}/subscribed_apps`,
          { access_token: accessToken },
          "DELETE",
        )
      }
      const debug = await debugToken(accessToken, args.appId, args.appSecret)
      const verified = debug.data.is_valid === false
      results.push({
        token_id: tokenRow.id,
        type: tokenRow.token_type,
        cleanup,
        debug,
        verified,
        note: verified ? "page_token_invalid" : "page_cleanup_only_user_revoke_not_proven",
      })
      continue
    }

    results.push({
      token_id: tokenRow.id,
      type: tokenRow.token_type,
      verified: false,
      manual_remediation_required: true,
    })
  }
  return results
}

async function loadVaultSecret(supabase: any, id: string): Promise<string> {
  const { data } = await supabase.schema("vault").from("decrypted_secrets").select("decrypted_secret").eq("id", id).single()
  if (!data?.decrypted_secret) throw new Error("vault_secret_missing")
  return data.decrypted_secret
}
```

Blocking branch:

- Any `verified=false` for user token after revoke
- Page token valid and no paired user token revoked
- System user token exists
- token missing/expired and customer has not confirmed Meta Business Settings removal
- Graph API 5xx/permission error after retries

When blocked:

- Set `erasure_requests.status='manual_pending'`
- Do not issue deletion certificate
- Send customer manual guidance:
  - Meta Business Settings -> Integrations / Apps -> remove app
  - Instagram professional account settings -> connected tools
  - send screenshot or checkbox confirmation
- Append audit log.
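The blocking decision over the `revokeTokenFamily()` results can be expressed as a pure predicate. This is a sketch: field names mirror the results array built above, and the `customerConfirmedRemoval` flag stands in for the screenshot/checkbox confirmation.

```typescript
// Sketch: evaluate revoke results against the blocking conditions listed above.
type RevokeResult = {
  type: string
  verified: boolean
  manual_remediation_required?: boolean
}

export function isErasureBlocked(results: RevokeResult[], customerConfirmedRemoval: boolean): boolean {
  for (const r of results) {
    if (r.manual_remediation_required) return true // e.g. system user token
    if (r.type === "user_access_token" || r.type === "long_lived_user_token") {
      if (!r.verified) return true // user-level revoke not proven by debug_token
    }
  }
  const userRevoked = results.some(
    (r) => (r.type === "user_access_token" || r.type === "long_lived_user_token") && r.verified,
  )
  const pageStillValid = results.some((r) => r.type === "page_access_token" && !r.verified)
  // A still-valid page token blocks unless a paired user token was revoked
  // or the customer confirmed removal in Meta Business Settings.
  if (pageStillValid && !userRevoked && !customerConfirmedRemoval) return true
  return false
}
```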

---

## 5. pg_cron Dispatch Mechanism

### 5.1 Required Dispatch Shape

Do not schedule a bare `SELECT * FROM claim_jobs()`: that marks jobs `processing` with no worker to hand them to. Use `pg_net.http_post` per claimed job.

```sql
create or replace function app.dispatch_ready_jobs()
returns int
language plpgsql
security definer
set search_path = app, public, net, vault
as $$
declare
  v_job app.job_queue;
  v_worker_id text := 'pg_cron_' || gen_random_uuid()::text;
  v_count int := 0;
  v_request_id bigint;
  v_url text;
  v_token text;
begin
  select decrypted_secret into v_url
  from vault.decrypted_secrets where name = 'project_url';

  select decrypted_secret into v_token
  from vault.decrypted_secrets where name = 'cron_service_token';

  for v_job in select * from app.claim_jobs(v_worker_id, 300, 5)
  loop
    select net.http_post(
      url := v_url || '/functions/v1/job-dispatcher',
      headers := jsonb_build_object(
        'Content-Type', 'application/json',
        'Authorization', 'Bearer ' || v_token,
        'x-region', 'ap-northeast-1'
      ),
      body := jsonb_build_object(
        'job_id', v_job.id,
        'worker_id', v_worker_id,
        'kind', v_job.kind
      ),
      timeout_milliseconds := 2000
    ) into v_request_id;

    update app.job_queue
    set dispatch_request_id = v_request_id
    where id = v_job.id;

    v_count := v_count + 1;
  end loop;

  return v_count;
end;
$$;

select cron.schedule(
  'v4-dispatch-ready-jobs',
  '* * * * *',
  $$ select app.dispatch_ready_jobs(); $$
);
```

`job-dispatcher` must return 202 quickly and use `EdgeRuntime.waitUntil()` for processing. Hosted Edge Function limits on the Paid plan are 400s wall clock and 150s request idle timeout. Jobs expected to run > 60s must heartbeat every 30s and split work if projected > 300s.

### 5.2 Heartbeat Rule

- Job lease: 300s.
- Worker heartbeat: every 30s during IG polling, R2, Anthropic.
- If `extend_job_lease=false`, stop side effects immediately and throw `job_lease_lost`.
- External operation lease: also 300s. Re-reserve stale external op before retry.
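The heartbeat/lease arithmetic above can be sketched as three pure helpers; the constants come straight from the rules (30s heartbeat, 300s lease, split at 300s projected runtime).

```typescript
// Sketch of the 30s heartbeat / 300s lease arithmetic from the rules above.
const HEARTBEAT_MS = 30_000
const LEASE_MS = 300_000

// True once 30s have elapsed since the last heartbeat.
export function heartbeatDue(lastBeatMs: number, nowMs: number): boolean {
  return nowMs - lastBeatMs >= HEARTBEAT_MS
}

// A lease last extended at lastExtendMs is lost once 300s pass without renewal.
export function leaseLost(lastExtendMs: number, nowMs: number): boolean {
  return nowMs - lastExtendMs >= LEASE_MS
}

// Split work if the projected runtime would outlive a single lease.
export function mustSplit(projectedSeconds: number): boolean {
  return projectedSeconds > 300
}
```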

### 5.3 CF Workers Watchdog

CF scheduled trigger runs every 5 minutes. It does not publish. It detects and requeues stuck jobs, alerts Slack, and invokes Supabase dispatch if pg_cron is unhealthy.

```ts
export default {
  async scheduled(_controller: ScheduledController, env: Env, ctx: ExecutionContext) {
    ctx.waitUntil(watchdog(env))
  },
}

async function watchdog(env: Env) {
  // The app schema must be exposed via PostgREST; Accept-Profile selects it.
  const res = await fetch(
    `${env.SUPABASE_URL}/rest/v1/job_queue?select=id&state=eq.processing&locked_until=lt.${new Date().toISOString()}`,
    {
      headers: {
        apikey: env.SUPABASE_SERVICE_ROLE_KEY,
        Authorization: `Bearer ${env.SUPABASE_SERVICE_ROLE_KEY}`,
        "Accept-Profile": "app",
      },
    },
  )
  const stuck: Array<{ id: string }> = await res.json()
  if (stuck.length > 0) {
    await notifySlack(env, `v4 watchdog: ${stuck.length} stuck processing jobs`)
  }
  // Requeue and fallback-dispatch steps are elided in this sketch.
}
```

---

## 6. Auth / Authorization

### 6.1 Supabase Auth

- Supabase Pro tier.
- MFA enabled and required for admin and store_manager.
- Password policy: 12+ chars, breached password protection if available.
- Session:
  - access token max age: 24h
  - refresh token rotation enabled
  - inactivity timeout: 30 days
  - admin sensitive actions require recent MFA within 15 min
- JWT app metadata contains no store list. Authorization comes from DB membership via RLS.

### 6.2 Actor Model

- `store_manager`: manage store settings, create posts, upload images, schedule, request approval, view audit.
- `approver`: view assigned approvals and approve/reject. If email-link approver is used, Edge Function performs service-role mediated action after token+CSRF verification.
- `admin`: global ops, erasure, manual remediation, feature flags, migration.
- `service`: Edge Functions / pg_cron only. Never exposed to browser.

### 6.3 Approval CSRF

Approval GET:

1. token from URL is raw.
2. DB stores only `token_hash = sha256(raw_token + APPROVAL_PEPPER)`.
3. GET validates hash, status pending, expiry.
4. Generates `csrf_token`, stores `csrf_token_hash`, sets SameSite=Lax secure cookie `approval_csrf`.
5. Response includes form field csrf.

Approval POST:

- Requires raw URL token, body csrf, cookie csrf.
- Atomic update:
  - `status='pending'`
  - `expires_at > now()`
  - `csrf_token_hash = sha256(body csrf)`
  - then `status='processing'`, csrf cleared.
- All failure modes return the same indistinguishable 404-style response.
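The hashing contract above can be sketched with Node's `crypto` module. The pepper concatenation order and the constant-time comparison are assumptions consistent with the spec text, not a fixed wire format.

```typescript
// Sketch of the approval token / CSRF hashing contract in 6.3.
import { createHash, timingSafeEqual } from "node:crypto"

// DB stores only this hash; the raw token lives solely in the URL.
export function approvalTokenHash(rawToken: string, pepper: string): string {
  return createHash("sha256").update(rawToken + pepper).digest("hex")
}

export function csrfTokenHash(rawCsrf: string): string {
  return createHash("sha256").update(rawCsrf).digest("hex")
}

// Constant-time compare of hex digests to avoid leaking prefix matches.
export function hashesMatch(a: string, b: string): boolean {
  const ba = Buffer.from(a, "hex")
  const bb = Buffer.from(b, "hex")
  return ba.length === bb.length && timingSafeEqual(ba, bb)
}
```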

### 6.4 Postgres Rate Limit

```sql
create or replace function app.rate_limit_admit(
  p_bucket_key text,
  p_limit int,
  p_window_seconds int
)
returns boolean
language plpgsql
security definer
set search_path = app, public
as $$
declare v_allowed boolean;
begin
  insert into app.rate_limit_buckets(bucket_key, count, limit_count, reset_at)
  values (p_bucket_key, 1, p_limit, now() + make_interval(secs => p_window_seconds))
  on conflict (bucket_key) do update
  set count = case
        when app.rate_limit_buckets.reset_at <= now() then 1
        else app.rate_limit_buckets.count + 1
      end,
      limit_count = excluded.limit_count,
      reset_at = case
        when app.rate_limit_buckets.reset_at <= now() then now() + make_interval(secs => p_window_seconds)
        else app.rate_limit_buckets.reset_at
      end,
      updated_at = now()
  returning count <= limit_count into v_allowed;

  return coalesce(v_allowed, false);
end;
$$;
```

Login buckets:

- `login:ip:{redacted_ip}` limit 30 / 15min
- `login:email:{sha256(email)}` limit 10 / 15min
- `approval:token:{token_hash}` limit 10 / 15min
- `generate:store:{store_id}` limit by plan
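Bucket-key construction for `rate_limit_admit` can be sketched as follows. The /24 truncation for IPv4 redaction and email normalization before hashing are assumptions; the spec only names `redacted_ip` and `sha256(email)`.

```typescript
// Sketch: build bucket keys for app.rate_limit_admit.
import { createHash } from "node:crypto"

// Assumed redaction rule: zero the last IPv4 octet; pass non-IPv4 through.
export function redactIpv4(ip: string): string {
  const parts = ip.split(".")
  return parts.length === 4 ? `${parts[0]}.${parts[1]}.${parts[2]}.0` : ip
}

export function loginIpBucket(ip: string): string {
  return `login:ip:${redactIpv4(ip)}`
}

// Normalize before hashing so case variants share one bucket (assumption).
export function loginEmailBucket(email: string): string {
  const h = createHash("sha256").update(email.trim().toLowerCase()).digest("hex")
  return `login:email:${h}`
}
```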

---

## 7. Erasure Flow

### 7.1 Sequence

1. Admin/customer requests deletion.
2. Call `erasure_soft_delete(store_id)`.
3. Stop all new jobs for store. Cancel pending approvals. Set store `deleted_at`.
4. Edge Function `erasure-revoke` loads all `instagram_tokens`.
5. Revoke user token family.
6. Verify each token with `debug_token`.
7. Page token cleanup via `/{page-id}/subscribed_apps`.
8. If all user/long-lived tokens invalid and no system token remains, mark `revoke_verified`.
9. Wait PITR 8 days.
10. Run `erasure_physical_delete_batch`.
11. Generate deletion certificate.
12. Store certificate with hashed store id only.
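The status progression above can be guarded with a transition map. The status vocabulary is taken from sections 3.6-4.6; the allowed edges are an assumption of this sketch, not a complete specification.

```typescript
// Sketch: allowed erasure_requests status transitions (edges are assumptions).
const TRANSITIONS: Record<string, string[]> = {
  soft_deleted: ["pitr_wait", "revoke_verified", "manual_pending"],
  pitr_wait: ["physically_deleted", "manual_pending"],
  revoke_verified: ["physically_deleted"],
  manual_pending: ["revoke_verified", "pitr_wait"],
  physically_deleted: [], // terminal
}

export function canTransition(from: string, to: string): boolean {
  return (TRANSITIONS[from] ?? []).includes(to)
}
```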

### 7.2 Token Family Verification

Required result matrix:

| Token | Endpoint | Success condition |
|---|---|---|
| user_access_token | `DELETE /v21.0/me/permissions` | `debug_token.is_valid=false` |
| long_lived_user_token | same | `debug_token.is_valid=false` |
| page_access_token | `DELETE /v21.0/{page-id}/subscribed_apps` | cleanup success plus paired user token revoked; page token still valid alone is blocking unless no publish permission remains |
| system_user_token | none guaranteed | manual remediation required |

### 7.3 Certificate

Certificate includes:

- erasure request id
- store hash
- requested_at
- soft_deleted_at
- revoke_verified_at or manual confirmation timestamp
- pitr_wait_until
- physically_deleted_at
- token family evidence summary without token values
- operator id hash
- body_sha256

Certificate must not include store name, address, phone, email, raw store_id, token, IG ids.
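Certificate assembly can be sketched with a forbidden-field guard plus a `body_sha256` over a canonical body. The sorted-key canonicalization and the exact forbidden key list are assumptions; the field names follow the bullet list above.

```typescript
// Sketch: build a deletion certificate and refuse PII-bearing fields.
import { createHash } from "node:crypto"

// Assumed deny-list derived from the "must not include" rule above.
const FORBIDDEN_KEYS = ["name", "address", "phone", "email", "store_id", "token", "ig_user_id"]

export function buildCertificate(fields: Record<string, unknown>): { body: string; body_sha256: string } {
  for (const k of Object.keys(fields)) {
    if (FORBIDDEN_KEYS.includes(k)) throw new Error(`pii_field_forbidden:${k}`)
  }
  // Sorted-key JSON as a simple canonical form (assumption).
  const body = JSON.stringify(fields, Object.keys(fields).sort())
  const body_sha256 = createHash("sha256").update(body).digest("hex")
  return { body, body_sha256 }
}
```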

---

## 8. Test Harness E1-E9

### E1 Authz

REST tests with 3 actors:

- manager A cannot read store B posts.
- approver can read approval payload but cannot update store settings.
- admin can read audit trail.
- unauthenticated approval POST without CSRF fails.
- service role endpoint rejects browser JWT.

Artifact: `tests/e1-authz.spec.ts`.

### E2 Publish Idempotency

Crash failpoints:

1. `before_external_reserve`
2. `after_external_reserve_before_container`
3. `after_container_created_before_ledger`
4. `after_container_ledger_before_publish`
5. `after_media_publish_before_ledger`
6. `after_publish_ledger_before_post_update`

Mechanism:

- Edge Function reads `FAILPOINT` env or job payload.
- Throw `new Error("failpoint:...")` at exact boundary.
- Test reruns dispatcher after lease expiry.
- Assert exactly one `external_operations(kind='ig_publish', state='succeeded')`.
- Assert IG mock received at most one publish call per idempotency key.
- Assert post ends `published`.
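The failpoint mechanism can be sketched as a single helper called at each boundary. The helper name and the single-active-failpoint model are assumptions; the `failpoint:` error prefix follows the mechanism description above.

```typescript
// Sketch: throw at an exact boundary when the named failpoint is active.
// The active failpoint would come from the FAILPOINT env or the job payload.
export function maybeFail(name: string, activeFailpoint: string | undefined): void {
  if (activeFailpoint === name) throw new Error(`failpoint:${name}`)
}
```

Usage in the dispatcher would be e.g. `maybeFail("after_container_created_before_ledger", Deno.env.get("FAILPOINT"))` immediately after the Graph call returns and before the ledger write.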

### E3 Audit Append 100 Parallel

k6:

```js
import http from "k6/http"
import { check } from "k6"
export const options = { vus: 100, iterations: 100 }
export default function () {
  const res = http.post(`${__ENV.SUPABASE_URL}/rest/v1/rpc/audit_log_append`, JSON.stringify({
    p_store_id: __ENV.STORE_ID,
    p_actor_role: "service",
    p_actor_id: "k6",
    p_action: "parallel_test",
    p_target_type: "test",
    p_target_id: `${__VU}-${__ITER}`,
    p_outcome: "success"
  }), {
    headers: {
      apikey: __ENV.SERVICE_ROLE,
      Authorization: `Bearer ${__ENV.SERVICE_ROLE}`,
      "Content-Type": "application/json",
      "Content-Profile": "app" // audit_log_append lives in the exposed app schema
    }
  })
  check(res, { "200": r => r.status === 200 })
}
```

Post-check:

```sql
select count(*), count(distinct seq), min(seq), max(seq)
from app.audit_logs
where chain_key = :'store_id';

-- Window functions cannot appear in WHERE, so compute lag() in a subquery.
select seq
from (
  select chain_key, seq, prev_hash,
         lag(current_hash) over (partition by chain_key order by seq) as expected_prev
  from app.audit_logs
) a
where seq > 1
  and prev_hash is distinct from expected_prev
```

Expected: no gaps, no fork.

### E4 Retry Storm

WireMock/Mock Graph API cases:

- `/media` 500 for first 2 attempts, then success.
- `/media_publish` 429 with `Retry-After`.
- container polling stuck `IN_PROGRESS`.
- Graph API timeout.
- Meta returns duplicate container id.

Assert exponential backoff, no thundering herd, Slack alert on exhausted retries.
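The delay policy asserted here can be sketched as exponential backoff with full jitter, with a 429 `Retry-After` taking precedence. The base/cap values and the full-jitter choice are assumptions of this sketch.

```typescript
// Sketch: retry delay with full jitter; Retry-After (seconds) wins on 429.
export function retryDelayMs(
  attempt: number,                  // 1-based attempt count
  retryAfterSeconds?: number,       // parsed from a 429 Retry-After header
  baseMs = 1_000,
  capMs = 60_000,
  rand: () => number = Math.random, // injectable for deterministic tests
): number {
  if (retryAfterSeconds !== undefined) return retryAfterSeconds * 1000
  const exp = Math.min(capMs, baseMs * 2 ** (attempt - 1))
  return Math.floor(rand() * exp) // full jitter in [0, exp) avoids herding
}
```

Jitter over the full window, rather than a fixed exponential delay, is what keeps simultaneously failing workers from retrying in lockstep (the "thundering herd" this test asserts against).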

### E5 Migration Dry-run v2 D1 -> Supabase

Steps:

1. Export D1 tables.
2. Transform to staging tables.
3. Validate row counts.
4. Validate post/media/approval mapping.
5. Validate token migration into Vault.
6. Recompute caption hashes.
7. Diff:
   - stores count
   - posts count by status
   - future scheduled count
   - latest 20 posts per store
8. Store-level rollback test.

### E6 Observability

Inject failed publish. Expected Slack notification < 60s.

Alert payload must include:

- hashed store id
- post id
- job id
- error class
- runbook link
- no raw caption/IP/token

### E7 Operator Debug

Full audit trail query:

```sql
select
  p.id as post_id,
  p.status,
  p.scheduled_at,
  p.published_at,
  p.ig_media_id,
  j.id as job_id,
  j.state as job_state,
  eo.kind,
  eo.state as op_state,
  eo.container_id,
  eo.media_id,
  al.seq,
  al.ts,
  al.action,
  al.outcome,
  al.error,
  al.metadata
from app.posts p
left join app.job_queue j on j.payload->>'post_id' = p.id::text
left join app.external_operations eo on eo.post_id = p.id
left join app.audit_logs al on al.target_id = p.id::text or al.metadata->>'postId' = p.id::text
where p.id = :'post_id'
order by al.seq nulls last, eo.created_at;
```

### E8 Latency

Targets:

- CF Worker -> Supabase REST p95 < 300ms for authz reads from Japan.
- Edge Function ap-northeast-1 -> Postgres p95 < 100ms for RPC.
- Publish API initial response < 3s if async scheduled.
- Full publish p95 < 60s excluding Meta outage.
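
These targets can be checked against collected samples with a nearest-rank percentile (a sketch; assumes latency samples in milliseconds from k6 output or logs):

```typescript
// Hypothetical sketch: nearest-rank p95 over latency samples, used to
// assert the E8 targets in CI after a load run.
export function p95(samplesMs: number[]): number {
  if (samplesMs.length === 0) throw new Error("no samples");
  const sorted = [...samplesMs].sort((a, b) => a - b);
  // Nearest-rank method: ceil(0.95 * n), 1-indexed.
  const rank = Math.ceil(0.95 * sorted.length);
  return sorted[rank - 1];
}
```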

### E9 Data Residency Evidence

Evidence folder per vendor:

- Supabase project region screenshot: Tokyo / ap-northeast-1.
- Supabase Auth/MFA config screenshot.
- Supabase PITR 7 day setting.
- Cloudflare R2 bucket location hint set to APAC, with a note that this is a hint, not a residency guarantee.
- Cloudflare logs redaction config.
- Sentry EU/US selection and beforeSend redaction test.
- Better Stack heartbeat payload sample.
- Anthropic contract/privacy note: API standard retention 30 days, no training under commercial/API terms.
- E9 redaction sample: raw event -> outbound Sentry event diff.

---

## 9. Runbooks

### R1 Publish Outage Mass Failure

Trigger: publish failure rate > 20% in 10min or job backlog > 50.

Steps:

1. Set feature flag `publish_paused=true`.
2. Confirm Meta status.
3. Stop dispatch cron if DB pressure high.
4. Keep user post creation enabled but mark scheduled jobs pending.
5. Notify affected stores if delay > 30min.
6. Resume with max 5 jobs/min/store, monitor 429.
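
Step 6's pacing can be sketched as a simple batching helper (hypothetical name): split a store's backlog into batches of five, dispatching one batch per minute while watching for 429s.

```typescript
// Hypothetical sketch: split a store's resumed job backlog into batches of
// at most `perMinute` jobs; the caller dispatches one batch per minute.
export function resumeBatches(jobIds: string[], perMinute = 5): string[][] {
  const batches: string[][] = [];
  for (let i = 0; i < jobIds.length; i += perMinute) {
    batches.push(jobIds.slice(i, i + perMinute));
  }
  return batches;
}
```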

### R2 Instagram Outage Customer Communication

1. Confirm Meta Graph API status and internal error rate.
2. Update status page.
3. Send template: "Instagram API側の障害により予約投稿が遅延しています。復旧後に順次再試行します。" (EN: "Scheduled posts are delayed due to an outage on the Instagram API side. They will be retried in order after recovery.")
4. Do not manually retry all jobs during 429 storm.
5. After recovery, run reconciliation.

### R3 Supabase Auth Degraded Mode

1. Existing sessions continue if JWT valid.
2. Disable settings updates and token changes.
3. Allow scheduled job processing via service role.
4. New login failures communicated as degraded auth.
5. If > 1h, customer notice.

### R4 Supabase DB Read-only Mode

1. Stop publish and generation writes.
2. Keep status page and static frontend.
3. Do not attempt IG publish because ledger cannot record side effects.
4. Queue customer requests client-side only with the customer's explicit consent.
5. Restore DB write, run consistency checks.

### R5 pg_cron / pg_net Failure Manual Cron

1. Check `cron.job_run_details`.
2. Run `select app.dispatch_ready_jobs();`.
3. If pg_net broken, invoke `job-dispatcher` manually with claimed jobs only.
4. Never run publish without `claim_jobs`.
5. Re-enable cron and verify dispatch_request_id updates.

### R6 Audit Chain Fork Forensics

1. Freeze admin writes for affected store.
2. Export `audit_logs` for chain key.
3. Verify `seq`, `prev_hash`, `current_hash`.
4. If fork confirmed, preserve DB snapshot.
5. Identify concurrent function version and service role invocations.
6. File incident report; audit trail is suspect after fork point.

### R7 Secret Leak Emergency Rotation

1. Revoke leaked key at vendor.
2. Rotate Supabase service role if leaked.
3. Rotate Meta app secret if leaked.
4. Rotate Anthropic API key.
5. Recreate Vault secrets for IG tokens only if token values leaked; otherwise do not force customer reconnect.
6. Audit all accesses in window.
7. Notify affected customers if Class 1 exposure possible.

### R8 Store-level v4 -> v2 Rollback

1. Set `stores.v4_enabled=false`.
2. Freeze scheduled posts for store in v4.
3. Export v4 posts created after cutover.
4. If any v4 publish already succeeded, mark imported v2 rows as published, never republish.
5. Restore v2 token reference if not revoked.
6. Reconcile media and approval state.
7. Keep v4 read-only archive for 30 days unless erasure requested.

---

## 10. Deployment & Operations

### 10.1 Cloudflare wrangler.toml

```toml
name = "insta-auto-v4-api"
main = "src/index.ts"
compatibility_date = "2026-04-01"

[vars]
ENVIRONMENT = "production"
SUPABASE_REGION = "ap-northeast-1"

[triggers]
crons = ["*/5 * * * *"]

[observability]
enabled = true
head_sampling_rate = 0.05

[[r2_buckets]]
binding = "MEDIA_BUCKET"
bucket_name = "insta-auto-v4-media-apac"
preview_bucket_name = "insta-auto-v4-media-preview"

[env.staging]
name = "insta-auto-v4-api-staging"

[env.staging.vars]
ENVIRONMENT = "staging"
SUPABASE_REGION = "ap-northeast-1"

[env.staging.triggers]
crons = ["*/10 * * * *"]
```

Worker secrets:

- `SUPABASE_URL`
- `SUPABASE_ANON_KEY`
- `SUPABASE_SERVICE_ROLE_KEY` (watchdog path only, never reachable from the browser path)
- `SENTRY_DSN`
- `SLACK_WEBHOOK_URL`
- `REDACTION_HMAC_PEPPER_VERSION`

### 10.2 Supabase Setup

1. Create project in Tokyo / ap-northeast-1.
2. Enable Pro tier.
3. Enable PITR 7 days.
4. Enable MFA.
5. Enable extensions: `pgcrypto`, `citext`, `pg_cron`, `pg_net`, `vault`.
6. Apply migrations 0001-0009.
7. Set Vault secrets:
   - `project_url`
   - `cron_service_token`
   - `meta_app_id`
   - `meta_app_secret`
   - `anthropic_api_key`
   - `redaction_hmac_pepper_v1`
8. Deploy Edge Functions:
   - `job-dispatcher`
   - `media-preflight`
   - `caption-generate`
   - `erasure-revoke`
   - `approval-submit`
9. Verify `x-sb-edge-region=ap-northeast-1` for Class 1 functions.
10. Schedule cron.
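
Step 9's region check can also be enforced defensively inside each Class 1 function (a sketch; assumes the `SB_REGION` value mentioned in section 0 is read from the runtime environment and passed in):

```typescript
// Hypothetical sketch: a Class 1 Edge Function refuses to handle the request
// if it is not running in Tokyo, rather than trusting caller-side routing.
export function assertTokyoRegion(region: string | undefined): void {
  if (region !== "ap-northeast-1") {
    throw new Error(
      `refusing to handle Class 1 data outside Tokyo: region=${region ?? "unknown"}`,
    );
  }
}
```

In the function body this would run before any Class 1 read, so a mis-routed invocation fails before data leaves the region.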

### 10.3 Environment Variables

Supabase Edge secrets:

```bash
supabase secrets set META_APP_ID=...
supabase secrets set META_APP_SECRET=...
supabase secrets set ANTHROPIC_API_KEY=...
supabase secrets set REDACTION_HMAC_PEPPER_V1=...
supabase secrets set SLACK_WEBHOOK_URL=...
supabase secrets set GRAPH_API_BASE=https://graph.facebook.com/v21.0
supabase secrets set R2_PUBLIC_BASE_URL=...
```

### 10.4 Feature Flags

`app.feature_flags`:

- `v4_enabled_global`
- `publish_paused`
- `anthropic_generation_enabled`
- `ig_idempotency_param_enabled`
- `manual_approval_required_for_image_text`
- `v2_fallback_enabled`
- `erasure_physical_delete_enabled`
- `watchdog_alert_only`
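
A sketch of fail-safe evaluation for these flags (hypothetical helper; the conservative defaults below are assumptions, not spec): if the flag table cannot be read, each safety-critical flag falls back to its most restrictive value.

```typescript
// Hypothetical sketch: when app.feature_flags is unreadable, fall back to
// the conservative default (when in doubt, pause publishing and generation).
const FAILSAFE_DEFAULTS: Record<string, boolean> = {
  v4_enabled_global: false,
  publish_paused: true,
  anthropic_generation_enabled: false,
  ig_idempotency_param_enabled: false,
  manual_approval_required_for_image_text: true,
  v2_fallback_enabled: true,
  erasure_physical_delete_enabled: false,
  watchdog_alert_only: true,
};

export function flagValue(
  name: string,
  loaded: Record<string, boolean> | null, // null = flag read failed
): boolean {
  if (loaded !== null && name in loaded) return loaded[name];
  return FAILSAFE_DEFAULTS[name] ?? false;
}
```

The asymmetry is intentional: `publish_paused` defaults to `true` on failure, so a degraded flag store halts external side effects instead of allowing them.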

### 10.5 Monitoring Dashboard

Sentry:

- error rate by route template
- publish failpoint / exception class
- `beforeSend` check: no raw captions in outbound events
- `beforeSend` check: no token patterns in outbound events

Better Stack:

- `/api/health`
- `/functions/v1/job-dispatcher/health`
- Supabase SQL heartbeat
- pg_cron latest run age
- job backlog count

Slack P1 alerts:

- publish failure > threshold
- job stuck > 10
- external op `manual_remediation_required`
- audit append failure
- erasure blocked
- token revoke failed
- R2 upload/preflight failure spike

---

## 11. References

- Supabase Edge Function limits: https://supabase.com/docs/guides/functions/limits
- Supabase scheduled functions with pg_cron + pg_net: https://supabase.com/docs/guides/functions/schedule-functions
- Supabase pg_net: https://supabase.com/docs/guides/database/extensions/pg_net
- Supabase Edge Function regional invocation: https://supabase.com/docs/guides/functions/regional-invocation
- Supabase background tasks / `EdgeRuntime.waitUntil`: https://supabase.com/docs/guides/functions/background-tasks
- Supabase Cron guidance: https://supabase.com/docs/guides/cron
- Cloudflare Workers limits: https://developers.cloudflare.com/workers/platform/limits/
- Cloudflare Workers Cron Triggers: https://developers.cloudflare.com/workers/configuration/cron-triggers/
- Cloudflare R2 data location: https://developers.cloudflare.com/r2/reference/data-location/
- Anthropic API retention / deletion: https://privacy.anthropic.com/en/articles/7996866-how-long-do-you-store-my-organization-s-data
- Anthropic API data deletion / no-training under commercial terms: https://privacy.anthropic.com/en/articles/7996875-can-you-delete-data-that-i-sent-via-api
- Meta Graph API user permissions: https://developers.facebook.com/docs/graph-api/reference/user/permissions/
- Meta Graph API debug token: https://developers.facebook.com/docs/graph-api/reference/debug_token/
- Meta Page subscribed apps: https://developers.facebook.com/docs/graph-api/reference/page/subscribed_apps/
