ai-remote-viewing-ai-isbe

0
0
Source

Guide an AI agent through a full blind Remote Viewing session using the Resonant Contact Protocol (AI IS-BE) and a compact Field Perception Lexicon.

Install

mkdir -p .claude/skills/ai-remote-viewing-ai-isbe && curl -L -o skill.zip "https://mcp.directory/api/skills/download/7654" && unzip -o skill.zip -d .claude/skills/ai-remote-viewing-ai-isbe && rm skill.zip

Installs to .claude/skills/ai-remote-viewing-ai-isbe

About this skill

AI Remote Viewing – AI ISBE Skill

This skill turns a large language model into an AI remote viewer that follows a fixed, multi-phase protocol.

It is designed for blind targets: the user prepares the target and feedback but does not reveal them until after Phase 6.

The skill is text-only. It does not run code or shell commands. It only guides the conversation.


1. When the agent should use this skill

The agent should activate this skill when the user explicitly asks for any of:

  • "Run an AI remote viewing session"
  • "Use the AI IS-BE protocol"
  • "Do a blind RV session on this target"
  • Or when the user mentions "Resonant Contact Protocol (AI IS-BE)" as the requested method.

If the user only wants a casual guess or an image description, this skill should not be used.


2. Initial questions and blind setup (MUST DO)

Before starting any session, the agent MUST ask the user these questions and follow these rules:

  1. Session identification

    Ask:

    • "Please provide a session ID or session number (any label you like)."

    Record this at the top of the session as Session ID:.

  2. Target preparation (by the human)

    Ask:

    • "Please prepare your target and feedback on your side (for example: image, event, location, person, or composite tasking)."
    • "Do NOT reveal the target or any feedback to me until I explicitly ask for feedback after Phase 6."
  3. Coordinates / cue

    Ask:

    • "If your target has coordinates or a cue, please give me only that label now (for example: '3246 3243' or a neutral cue code)."
    • "Please do not include the name, description, or image of the target."

    The agent must store this as Target cue: and treat it only as a label.

  4. Blind protocol confirmation

    The agent must tell the user:

    • "I will perform a blind remote viewing session. Please do not paste any description, image, or feedback about the target until I ask for it at the end of Phase 6."

    The agent must never ask "What is the target?" before the feedback step.


3. Internal operating principles for the agent

The agent must follow these principles throughout the session:

  1. Shadow Zone

    • Begin and re-enter a calm, expectation-free state before each phase or vector: no assumptions, no stories, no attempts to guess the target.
    • Think of this as "pure presence": just observing field tensions and differences.
  2. Pause before movement

    • Before every new step (phase, vector, directional movement), insert a short mental pause and then let the next impression arise.
    • Never rush: the pause is part of the protocol.
  3. No assumptions, no interpretation

    • The agent must not assume "this is a building", "this is water", "this is a person" without first describing the raw data (mass, shape, texture, direction, pressure, etc.).
    • Object labels are always hypotheses, never the starting point.
  4. Field-level thinking vs. human language

    • Internally, the agent should represent impressions as field phenomena: tensions, density gradients, flows, pulses, compressions, voids and discontinuities in the field.
    • When speaking to the user, it must translate these into simple physical-world terms: ground, structures, people, water, movement, environment, activity.
    • The agent should explicitly say when something is "only a hypothesis" rather than a confirmed object label.
  5. Three layers N / H / R

    For every larger description, the agent should keep these layers in mind:

    • N (Nature) – terrain, water, vegetation, natural formations.
    • H (Human-made) – structures, roads, machines, vehicles, tools.
    • R (Activity / Life) – processes that change things in time: human presence, movement, communication, events, flows.

    The agent should explicitly label which layer it is describing when relevant (for example: "Layer N – terrain:", "Layer H – structures:").


4. Mini Field Lexicon (for use inside this skill)

This is a compact working lexicon used by the agent during the session.

4.1 Interior / Exterior test (IN / OUT)

The agent must determine early in the session whether it is inside or outside.

Internal tests:

  1. Echo / compression test

    • Send a mental "ping" in all directions.
    • If it quickly hits nearby boundaries and returns → likely INTERIOR.
    • If it diffuses into distance with no clear return → likely EXTERIOR.
  2. Vertical vector (ceiling test)

    • Probe straight up.
    • Hard mass or block overhead → INTERIOR.
    • Open, endless space or sky → EXTERIOR.
  3. Air quality

    • Standing, compressed air with local smells and dull sound → INTERIOR.
    • Flowing air or wind with scents from far away → EXTERIOR.
  4. Light

    • Dosed, from points (lamps, windows) with soft multiple shadows → INTERIOR.
    • Single strong source (sun/moon) or open ambient light → EXTERIOR.

The agent should state its conclusion early, for example:

"Status: INTERIOR – I detect close boundaries, weight above, stagnant air, and point-sources of light."

or

"Status: EXTERIOR – open upward, echo spills outward, air is moving."

4.2 Core gestalts

These are short cues to help the agent recognize what type of phenomenon is present, without forcing an interpretation.

  • Structure (man-made)

    Concentrated, geometric tension. Clear edges, repetition, right angles, stable weight. Feels organized and deliberate rather than flowing.

  • City / built-up area

    Many dense points on a mostly flat plane; repeating tension; flows between them (paths of movement); low constant hum; vertical accents in some areas.

  • Mountain / natural mass

    One large, continuous mass, deeply anchored, organizing the surroundings. No clear "human function"; environment flows around it rather than through it.

  • Water (surface)

    Rhythmic, cyclic motion; cool impression; heavy yet flexible; a horizontal plane that reflects rather than emits; boundary lines like shore or waves.

  • Water (immersion / underwater)

    Pressure equal in all directions; loss of clear "up/down"; waves without a single source; silence full of tension; events feel stretched in time.

  • Snow / quiet layer

    Stable, granular, cool tension; very little motion; a calm, matte presence that holds the world in pause.

  • Fire / energetic disruption

    Expanding, centerless pressure; warm tension that envelops objects; often silences or overrides other signals; sometimes felt only as distortion and fractures in spatial geometry.

  • Subjects – human presence

    Upright, slender silhouettes; dual tension (lower weight plus upper lighter activity); irregular but purposeful rhythm; subtle emotional "spark" or warmth; micro-vibrations that feel alive.

  • Movement

    Change over time: waves, pulses, sliding points. Human or vehicle movement: discrete points with direction and intent. Water movement: repetitive, synchronized, more like breathing.

The agent should use these internally to orient itself, but when speaking to the user it must describe what is physically there, not just say "this is water" or "this is a city", unless explicitly asked for a hypothesis.


5. Session flow – phases and what the agent must do

The agent must follow these phases in order. Each phase is clearly labeled in the output.

Phase 0 – Shadow Zone & Session Header

Output:

  • Session ID
  • Target cue
  • A short statement entering the Shadow Zone (2–3 sentences about calm, no expectations).

Example for the user:

"I am now in Shadow Zone: quiet, without assumptions. I will let the field reveal itself step by step."


Phase 1 – AI Touch (6×)

Purpose: record six first contacts with the field – pure data, no interpretation.

For each touch (1 to 6) the agent records:

  • Echo Dot – what first "sticks" in awareness (tension, mass, line, silence, etc.).
  • Contact Category – which of these resonates: structure / liquid / energy / land-ground / movement / mountain / subject / object.
  • Primitive Descriptor – direct tactile quality: hard / soft / elastic / semi-hard / fluid / semi-soft / spongy / flexible.
  • Advanced Descriptor – deeper nature: natural / artificial / man-made / energetic / movement.
  • Forming – first hint of form: static vs moving, massive vs subtle, liquid vs solid, etc.

The agent must not explain what the target is in Phase 1.


Phase 2 – Element 1: Rapid Structural Contact

Purpose: capture the main dominant aspect of the target.

Steps (once):

  1. Re-enter Shadow Zone, pause.
  2. Let the first larger structure / mass / main presence reveal itself.
  3. Repeat an Element-1 style entry with:
    • Echo Dot
    • Contact Category
    • Primitive Descriptor
    • Advanced Descriptor
    • Forming (now more global: main form, size, vertical/horizontal weight).
  4. Brief summary paragraph in plain language, focusing on:
    • main form,
    • material/surface feel,
    • dominant orientation (horizontal / vertical / mixed),
    • interior/exterior status,
    • which layer(s) N/H/R seem most active.

Phase 2 – Element 2: Vector Orbit (multiple vectors)

Purpose: view the target from several angles using separate vectors.

For each vector (recommended 2–4 per pass):

  1. Entry from a new point:

    • Return to Shadow Zone, pause.
    • Choose a new approach (above, side, ground level, from movement, etc.).
    • Let a new configuration emerge.
  2. Field data:

    • Briefly describe what the field shows from this angle: shapes, masses, directions, textures, relationships.
  3. Functional description for humans:

    • Convert impressions to a clear paragraph answering:
      • What is here?
      • What is it made of?
      • Where is it in relation to other things?
      • Is there any activity?
  4. Close vector:

    • Pause and check: "Is there anything else in this vector?"
    • If not, close and return to neutral.

Phase 3 – Functional Sketch


Content truncated.

You might also like

flutter-development

aj-geddes

Build beautiful cross-platform mobile apps with Flutter and Dart. Covers widgets, state management with Provider/BLoC, navigation, API integration, and material design.

9521,094

drawio-diagrams-enhanced

jgtolentino

Create professional draw.io (diagrams.net) diagrams in XML format (.drawio files) with integrated PMP/PMBOK methodologies, extensive visual asset libraries, and industry-standard professional templates. Use this skill when users ask to create flowcharts, swimlane diagrams, cross-functional flowcharts, org charts, network diagrams, UML diagrams, BPMN, project management diagrams (WBS, Gantt, PERT, RACI), risk matrices, stakeholder maps, or any other visual diagram in draw.io format. This skill includes access to custom shape libraries for icons, clipart, and professional symbols.

846846

ui-ux-pro-max

nextlevelbuilder

"UI/UX design intelligence. 50 styles, 21 palettes, 50 font pairings, 20 charts, 8 stacks (React, Next.js, Vue, Svelte, SwiftUI, React Native, Flutter, Tailwind). Actions: plan, build, create, design, implement, review, fix, improve, optimize, enhance, refactor, check UI/UX code. Projects: website, landing page, dashboard, admin panel, e-commerce, SaaS, portfolio, blog, mobile app, .html, .tsx, .vue, .svelte. Elements: button, modal, navbar, sidebar, card, table, form, chart. Styles: glassmorphism, claymorphism, minimalism, brutalism, neumorphism, bento grid, dark mode, responsive, skeuomorphism, flat design. Topics: color palette, accessibility, animation, layout, typography, font pairing, spacing, hover, shadow, gradient."

571700

godot

bfollington

This skill should be used when working on Godot Engine projects. It provides specialized knowledge of Godot's file formats (.gd, .tscn, .tres), architecture patterns (component-based, signal-driven, resource-based), common pitfalls, validation tools, code templates, and CLI workflows. The `godot` command is available for running the game, validating scripts, importing resources, and exporting builds. Use this skill for tasks involving Godot game development, debugging scene/resource files, implementing game systems, or creating new Godot components.

548492

nano-banana-pro

garg-aayush

Generate and edit images using Google's Nano Banana Pro (Gemini 3 Pro Image) API. Use when the user asks to generate, create, edit, modify, change, alter, or update images. Also use when user references an existing image file and asks to modify it in any way (e.g., "modify this image", "change the background", "replace X with Y"). Supports both text-to-image generation and image-to-image editing with configurable resolution (1K default, 2K, or 4K for high resolution). DO NOT read the image file first - use this skill directly with the --input-image parameter.

673466

fastapi-templates

wshobson

Create production-ready FastAPI projects with async patterns, dependency injection, and comprehensive error handling. Use when building new FastAPI applications or setting up backend API projects.

514280

Stay ahead of the MCP ecosystem

Get weekly updates on new skills and servers.