Tool Signatures
This page provides the complete API reference for all 35+ tools available in the VFX MCP Server. Each tool signature includes parameters, types, default values, and return types.
🏷️ Type Definitions
Common Types

Context = Optional[MCP Context object for progress reporting]
PathStr = str  # File path (absolute or relative to working directory)
QualityLevel = Literal["low", "medium", "high", "ultra"]
VideoFormat = Literal["mp4", "avi", "mov", "mkv", "webm"]
AudioFormat = Literal["mp3", "wav", "aac", "flac"]
📹 Basic Operations
trim_video

Extract specific segments from video files.

def trim_video(
    input_path: PathStr,
    output_path: PathStr,
    start_time: float,
    duration: Optional[float] = None,
    ctx: Context = None
) -> str

Parameters:

- input_path: Source video file path
- output_path: Output trimmed video path
- start_time: Start time in seconds
- duration: Duration in seconds (optional, trims to end if not specified)
- ctx: Optional progress reporting context
Returns: Success message with output file path
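For example, trimming the first 30 seconds of a clip with the MCP call pattern shown under Usage Examples below (file names and times are illustrative):

result = await session.call_tool("trim_video", {
    "input_path": "interview.mp4",
    "output_path": "interview_intro.mp4",
    "start_time": 0.0,
    "duration": 30.0
})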
get_video_info
Retrieve comprehensive video metadata.

def get_video_info(
    video_path: PathStr,
    ctx: Context = None
) -> Dict[str, Any]

Parameters:

- video_path: Path to video file to analyze
- ctx: Optional progress reporting context

Returns: Dictionary with video metadata:

{
    "format": str,
    "duration": float,
    "width": int,
    "height": int,
    "fps": float,
    "video_codec": str,
    "audio_codec": str,
    "bitrate": int,
    "file_size": str,
    "has_audio": bool
}
resize_video
Change video resolution with quality control.

def resize_video(
    input_path: PathStr,
    output_path: PathStr,
    width: Optional[int] = None,
    height: Optional[int] = None,
    scale_factor: Optional[float] = None,
    maintain_aspect: bool = True,
    quality: QualityLevel = "medium",
    ctx: Context = None
) -> str

Parameters:

- input_path: Source video file path
- output_path: Output resized video path
- width: Target width in pixels (optional)
- height: Target height in pixels (optional)
- scale_factor: Scaling factor (e.g., 0.5 for half size)
- maintain_aspect: Whether to maintain aspect ratio
- quality: Encoding quality level
- ctx: Optional progress reporting context
Returns: Success message with output file path
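For instance, scaling a clip to 1280 pixels wide while keeping its aspect ratio (file names and values are illustrative):

result = await session.call_tool("resize_video", {
    "input_path": "source.mp4",
    "output_path": "source_720p.mp4",
    "width": 1280,
    "maintain_aspect": True,
    "quality": "high"
})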
concatenate_videos
Join multiple video files seamlessly.

def concatenate_videos(
    input_paths: List[PathStr],
    output_path: PathStr,
    transition_duration: float = 0.0,
    ctx: Context = None
) -> str

Parameters:

- input_paths: List of video file paths to concatenate
- output_path: Output concatenated video path
- transition_duration: Crossfade transition duration in seconds
- ctx: Optional progress reporting context
Returns: Success message with output file path
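For example, joining three clips with a one-second crossfade between them (paths are illustrative):

result = await session.call_tool("concatenate_videos", {
    "input_paths": ["intro.mp4", "main.mp4", "outro.mp4"],
    "output_path": "full_cut.mp4",
    "transition_duration": 1.0
})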
🎵 Audio Processing
extract_audio

Extract audio tracks from video files.

def extract_audio(
    input_path: PathStr,
    output_path: PathStr,
    format: AudioFormat = "mp3",
    quality: QualityLevel = "medium",
    sample_rate: Optional[int] = None,
    ctx: Context = None
) -> str

Parameters:

- input_path: Source video file path
- output_path: Output audio file path
- format: Output audio format
- quality: Audio quality level
- sample_rate: Target sample rate in Hz (optional)
- ctx: Optional progress reporting context
Returns: Success message with output file path
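For example, extracting a high-quality WAV track resampled to 48 kHz (file names are illustrative):

result = await session.call_tool("extract_audio", {
    "input_path": "interview.mp4",
    "output_path": "interview.wav",
    "format": "wav",
    "quality": "high",
    "sample_rate": 48000
})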
add_audio
Add or replace audio tracks in videos.

def add_audio(
    video_path: PathStr,
    audio_path: PathStr,
    output_path: PathStr,
    mode: Literal["replace", "mix"] = "replace",
    volume: float = 1.0,
    audio_offset: float = 0.0,
    fade_in: float = 0.0,
    fade_out: float = 0.0,
    ctx: Context = None
) -> str

Parameters:

- video_path: Source video file path
- audio_path: Audio file to add
- output_path: Output video with new audio
- mode: "replace" or "mix" audio mode
- volume: Volume multiplier (1.0 = original)
- audio_offset: Audio delay in seconds
- fade_in: Fade-in duration in seconds
- fade_out: Fade-out duration in seconds
- ctx: Optional progress reporting context
Returns: Success message with output file path
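For example, mixing a music bed under the original audio at reduced volume with short fades (file names and levels are illustrative):

result = await session.call_tool("add_audio", {
    "video_path": "vlog.mp4",
    "audio_path": "music.mp3",
    "output_path": "vlog_scored.mp4",
    "mode": "mix",
    "volume": 0.3,
    "fade_in": 1.0,
    "fade_out": 2.0
})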
extract_audio_spectrum
Generate visual audio spectrum videos.

def extract_audio_spectrum(
    input_path: PathStr,
    output_path: PathStr,
    visualization_type: Literal["spectrum", "waveform", "bars", "circle"] = "spectrum",
    width: int = 1920,
    height: int = 1080,
    color_scheme: Literal["rainbow", "fire", "cool", "mono"] = "rainbow",
    sensitivity: float = 1.0,
    ctx: Context = None
) -> str

Parameters:

- input_path: Source video or audio file path
- output_path: Output visualization video path
- visualization_type: Type of audio visualization
- width: Output video width in pixels
- height: Output video height in pixels
- color_scheme: Color scheme for visualization
- sensitivity: Audio sensitivity multiplier
- ctx: Optional progress reporting context
Returns: Success message with output file path
merge_audio_tracks
Merge multiple audio tracks with timing control.

def merge_audio_tracks(
    input_paths: List[PathStr],
    output_path: PathStr,
    volumes: Optional[List[float]] = None,
    delays: Optional[List[float]] = None,
    crossfade_duration: float = 0.0,
    output_format: AudioFormat = "mp3",
    ctx: Context = None
) -> str

Parameters:

- input_paths: List of audio file paths to merge
- output_path: Output merged audio file path
- volumes: Volume levels for each track (defaults to 1.0)
- delays: Delay in seconds for each track (defaults to 0.0)
- crossfade_duration: Crossfade duration between tracks
- output_format: Output audio format
- ctx: Optional progress reporting context
Returns: Success message with output file path
🎨 Effects & Filters
apply_filter

Apply custom FFmpeg filters.

def apply_filter(
    input_path: PathStr,
    output_path: PathStr,
    filter_string: str,
    ctx: Context = None
) -> str

Parameters:

- input_path: Source video file path
- output_path: Output filtered video path
- filter_string: FFmpeg filter expression
- ctx: Optional progress reporting context
Returns: Success message with output file path
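For example, applying the standard FFmpeg hue filter to desaturate a clip; filter_string accepts any valid FFmpeg filter expression (file names are illustrative):

result = await session.call_tool("apply_filter", {
    "input_path": "clip.mp4",
    "output_path": "clip_bw.mp4",
    "filter_string": "hue=s=0"  # zero saturation for a black-and-white look
})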
change_speed
Adjust video playback speed.

def change_speed(
    input_path: PathStr,
    output_path: PathStr,
    speed_factor: float,
    preserve_audio: bool = True,
    ctx: Context = None
) -> str

Parameters:

- input_path: Source video file path
- output_path: Output speed-adjusted video path
- speed_factor: Speed multiplier (0.5 = half speed, 2.0 = double speed)
- preserve_audio: Whether to maintain audio pitch
- ctx: Optional progress reporting context
Returns: Success message with output file path
apply_color_grading
Professional color correction and grading.

def apply_color_grading(
    input_path: PathStr,
    output_path: PathStr,
    brightness: float = 0.0,
    contrast: float = 1.0,
    saturation: float = 1.0,
    gamma: float = 1.0,
    temperature: float = 0.0,
    tint: float = 0.0,
    shadows: float = 0.0,
    highlights: float = 0.0,
    ctx: Context = None
) -> str

Parameters:

- input_path: Source video file path
- output_path: Output color-graded video path
- brightness: Brightness adjustment (-1.0 to 1.0)
- contrast: Contrast multiplier (0.0 to 3.0)
- saturation: Saturation multiplier (0.0 to 3.0)
- gamma: Gamma correction (0.1 to 3.0)
- temperature: Color temperature (-1.0 warm to 1.0 cool)
- tint: Green/magenta tint (-1.0 to 1.0)
- shadows: Shadow adjustment (-1.0 to 1.0)
- highlights: Highlight adjustment (-1.0 to 1.0)
- ctx: Optional progress reporting context
Returns: Success message with output file path
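For example, a gentle warm grade with a small contrast boost and lifted shadows (values are illustrative and use the ranges listed above):

result = await session.call_tool("apply_color_grading", {
    "input_path": "scene.mp4",
    "output_path": "scene_graded.mp4",
    "contrast": 1.1,
    "saturation": 1.05,
    "temperature": -0.2,  # negative values shift toward warm
    "shadows": 0.1
})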
apply_motion_blur
Add realistic motion blur effects.

def apply_motion_blur(
    input_path: PathStr,
    output_path: PathStr,
    angle: float = 0.0,
    distance: float = 5.0,
    ctx: Context = None
) -> str

Parameters:

- input_path: Source video file path
- output_path: Output motion-blurred video path
- angle: Blur direction in degrees (0-360)
- distance: Blur distance in pixels
- ctx: Optional progress reporting context
Returns: Success message with output file path
apply_video_stabilization
Advanced video stabilization.

def apply_video_stabilization(
    input_path: PathStr,
    output_path: PathStr,
    smoothing: float = 10.0,
    crop_black: bool = True,
    ctx: Context = None
) -> str

Parameters:

- input_path: Source shaky video file path
- output_path: Output stabilized video path
- smoothing: Smoothing strength (1.0 to 100.0)
- crop_black: Whether to crop black borders
- ctx: Optional progress reporting context
Returns: Success message with output file path
🔄 Format Conversion
generate_thumbnail

Extract video frames as image thumbnails.

def generate_thumbnail(
    input_path: PathStr,
    output_path: PathStr,
    timestamp: float = 0.0,
    width: Optional[int] = None,
    height: Optional[int] = None,
    format: Literal["jpg", "png", "bmp"] = "jpg",
    ctx: Context = None
) -> str

Parameters:

- input_path: Source video file path
- output_path: Output thumbnail image path
- timestamp: Time in seconds to extract frame
- width: Thumbnail width in pixels (optional)
- height: Thumbnail height in pixels (optional)
- format: Output image format
- ctx: Optional progress reporting context
Returns: Success message with output file path
convert_format
Convert between video formats.

def convert_format(
    input_path: PathStr,
    output_path: PathStr,
    video_codec: Optional[str] = None,
    audio_codec: Optional[str] = None,
    bitrate: Optional[str] = None,
    quality: QualityLevel = "medium",
    ctx: Context = None
) -> str

Parameters:

- input_path: Source video file path
- output_path: Output converted video path
- video_codec: Target video codec (e.g., "libx264", "libx265")
- audio_codec: Target audio codec (e.g., "aac", "mp3")
- bitrate: Target bitrate (e.g., "2M", "500k")
- quality: Encoding quality level
- ctx: Optional progress reporting context
Returns: Success message with output file path
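For example, re-encoding a MOV master to an H.265 MP4 with a 2 Mbps target, using the codec and bitrate formats shown above (file names are illustrative):

result = await session.call_tool("convert_format", {
    "input_path": "master.mov",
    "output_path": "master_h265.mp4",
    "video_codec": "libx265",
    "audio_codec": "aac",
    "bitrate": "2M",
    "quality": "high"
})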
🎬 Advanced Operations
create_video_slideshow

Generate slideshows from images.

def create_video_slideshow(
    image_folder: PathStr,
    output_path: PathStr,
    duration_per_image: float = 3.0,
    transition_duration: float = 1.0,
    fps: int = 30,
    resolution: str = "1920x1080",
    ctx: Context = None
) -> str

Parameters:

- image_folder: Directory containing images
- output_path: Output slideshow video path
- duration_per_image: Display time per image in seconds
- transition_duration: Transition duration between images
- fps: Frame rate of output video
- resolution: Output resolution (e.g., "1920x1080")
- ctx: Optional progress reporting context
Returns: Success message with output file path
create_video_mosaic
Create multi-video grid layouts.

def create_video_mosaic(
    input_paths: List[PathStr],
    output_path: PathStr,
    layout: str = "2x2",
    width: int = 1920,
    height: int = 1080,
    border_width: int = 2,
    border_color: str = "black",
    ctx: Context = None
) -> str

Parameters:

- input_paths: List of video file paths for mosaic
- output_path: Output mosaic video path
- layout: Grid layout (e.g., "2x2", "3x1", "4x2")
- width: Output video width
- height: Output video height
- border_width: Border width between videos in pixels
- border_color: Border color name or hex code
- ctx: Optional progress reporting context
Returns: Success message with output file path
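For example, a 2x2 grid of four camera angles with a thin white border (paths and colors are illustrative):

result = await session.call_tool("create_video_mosaic", {
    "input_paths": ["cam1.mp4", "cam2.mp4", "cam3.mp4", "cam4.mp4"],
    "output_path": "multicam.mp4",
    "layout": "2x2",
    "border_width": 4,
    "border_color": "white"
})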
create_picture_in_picture
Create picture-in-picture overlays.

def create_picture_in_picture(
    main_video: PathStr,
    overlay_video: PathStr,
    output_path: PathStr,
    position: str = "top-right",
    scale: float = 0.25,
    opacity: float = 1.0,
    x_offset: int = 10,
    y_offset: int = 10,
    ctx: Context = None
) -> str

Parameters:

- main_video: Main background video path
- overlay_video: Overlay video path
- output_path: Output picture-in-picture video path
- position: Overlay position ("top-left", "top-right", "bottom-left", "bottom-right", "center")
- scale: Overlay scale factor (0.1 to 1.0)
- opacity: Overlay opacity (0.0 to 1.0)
- x_offset: Horizontal offset from position in pixels
- y_offset: Vertical offset from position in pixels
- ctx: Optional progress reporting context
Returns: Success message with output file path
📊 Analysis & Extraction
extract_frames

Extract individual frames as images.

def extract_frames(
    input_path: PathStr,
    output_directory: PathStr,
    interval: float = 1.0,
    start_time: float = 0.0,
    end_time: Optional[float] = None,
    format: Literal["jpg", "png", "bmp"] = "jpg",
    ctx: Context = None
) -> str

Parameters:

- input_path: Source video file path
- output_directory: Directory to save extracted frames
- interval: Time interval between frames in seconds
- start_time: Start time for extraction in seconds
- end_time: End time for extraction in seconds (optional)
- format: Output image format
- ctx: Optional progress reporting context
Returns: Success message with frame count and directory
detect_scene_changes
Automatic scene change detection.

def detect_scene_changes(
    input_path: PathStr,
    sensitivity: float = 0.3,
    min_scene_duration: float = 1.0,
    output_format: Literal["json", "csv", "txt"] = "json",
    ctx: Context = None
) -> Dict[str, Any]

Parameters:

- input_path: Source video file path
- sensitivity: Detection sensitivity (0.1 to 1.0)
- min_scene_duration: Minimum scene duration in seconds
- output_format: Format for scene data export
- ctx: Optional progress reporting context
Returns: Dictionary with scene change timestamps and metadata
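For example, scanning a clip with slightly higher sensitivity; the exact keys of the returned dictionary are not documented on this page, so this sketch simply prints the raw result (file name and values are illustrative):

scenes = await session.call_tool("detect_scene_changes", {
    "input_path": "documentary.mp4",
    "sensitivity": 0.4,
    "min_scene_duration": 2.0
})
print(scenes)  # dictionary with scene change timestamps and metadata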
extract_video_statistics
Comprehensive technical analysis.

def extract_video_statistics(
    input_path: PathStr,
    include_quality_metrics: bool = True,
    include_motion_analysis: bool = False,
    ctx: Context = None
) -> Dict[str, Any]

Parameters:

- input_path: Source video file path
- include_quality_metrics: Whether to calculate quality metrics
- include_motion_analysis: Whether to analyze motion patterns
- ctx: Optional progress reporting context
Returns: Dictionary with comprehensive video statistics
extract_dominant_colors
Color palette analysis for videos.

def extract_dominant_colors(
    input_path: PathStr,
    num_colors: int = 5,
    sample_interval: float = 5.0,
    output_format: Literal["json", "image"] = "json",
    ctx: Context = None
) -> Dict[str, Any]

Parameters:

- input_path: Source video file path
- num_colors: Number of dominant colors to extract
- sample_interval: Sampling interval in seconds
- output_format: Output format for color data
- ctx: Optional progress reporting context
Returns: Dictionary with dominant colors and usage statistics
📝 Text & Graphics
add_text_overlay

Add text overlays to videos.

def add_text_overlay(
    input_path: PathStr,
    output_path: PathStr,
    text: str,
    font_size: int = 24,
    font_color: str = "white",
    position: str = "center",
    x_offset: int = 0,
    y_offset: int = 0,
    start_time: float = 0.0,
    duration: Optional[float] = None,
    background_color: Optional[str] = None,
    font_family: str = "Arial",
    ctx: Context = None
) -> str

Parameters:

- input_path: Source video file path
- output_path: Output video with text overlay
- text: Text content to overlay
- font_size: Font size in points
- font_color: Text color name or hex code
- position: Text position ("top", "center", "bottom", "top-left", etc.)
- x_offset: Horizontal offset from position
- y_offset: Vertical offset from position
- start_time: When text appears in seconds
- duration: How long text appears (optional, full video if not specified)
- background_color: Background color for text (optional)
- font_family: Font family name
- ctx: Optional progress reporting context
Returns: Success message with output file path
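For example, a lower-left caption that appears two seconds in and stays on screen for five seconds (text, colors, and timings are illustrative):

result = await session.call_tool("add_text_overlay", {
    "input_path": "talk.mp4",
    "output_path": "talk_titled.mp4",
    "text": "Guest Speaker",
    "position": "bottom-left",
    "font_size": 36,
    "font_color": "white",
    "background_color": "black",
    "start_time": 2.0,
    "duration": 5.0
})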
create_animated_text
Create animated text effects.

def create_animated_text(
    output_path: PathStr,
    text: str,
    animation_type: Literal["fade_in", "slide_in", "typewriter", "bounce"] = "fade_in",
    duration: float = 5.0,
    font_size: int = 48,
    font_color: str = "white",
    background_color: str = "black",
    width: int = 1920,
    height: int = 1080,
    ctx: Context = None
) -> str

Parameters:

- output_path: Output animated text video path
- text: Text content to animate
- animation_type: Type of text animation
- duration: Total animation duration in seconds
- font_size: Font size in points
- font_color: Text color name or hex code
- background_color: Background color name or hex code
- width: Output video width
- height: Output video height
- ctx: Optional progress reporting context
Returns: Success message with output file path
🎭 Specialized Effects
create_green_screen_effect

Chroma key compositing.

def create_green_screen_effect(
    foreground_path: PathStr,
    background_path: PathStr,
    output_path: PathStr,
    key_color: str = "green",
    threshold: float = 0.3,
    smoothing: float = 0.1,
    spill_suppression: float = 0.1,
    ctx: Context = None
) -> str

Parameters:

- foreground_path: Video with green/blue screen
- background_path: Background video or image
- output_path: Output composited video
- key_color: Color to key out ("green", "blue", or hex code)
- threshold: Keying threshold (0.0 to 1.0)
- smoothing: Edge smoothing amount
- spill_suppression: Color spill suppression
- ctx: Optional progress reporting context
Returns: Success message with output file path
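For example, keying a green-screen performance over a rendered backdrop (file names and keying values are illustrative):

result = await session.call_tool("create_green_screen_effect", {
    "foreground_path": "actor_greenscreen.mp4",
    "background_path": "city_backdrop.mp4",
    "output_path": "composited.mp4",
    "key_color": "green",
    "threshold": 0.35,
    "spill_suppression": 0.2
})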
create_particle_system
Generate particle effects.

def create_particle_system(
    output_path: PathStr,
    particle_type: Literal["snow", "rain", "sparks", "smoke"] = "snow",
    intensity: float = 1.0,
    duration: float = 10.0,
    width: int = 1920,
    height: int = 1080,
    background_color: str = "black",
    ctx: Context = None
) -> str

Parameters:

- output_path: Output particle effect video path
- particle_type: Type of particle system
- intensity: Particle density/intensity (0.1 to 5.0)
- duration: Effect duration in seconds
- width: Output video width
- height: Output video height
- background_color: Background color (transparent if "none")
- ctx: Optional progress reporting context
Returns: Success message with output file path
apply_3d_transforms
Apply 3D perspective transforms.

def apply_3d_transforms(
    input_path: PathStr,
    output_path: PathStr,
    rotation_x: float = 0.0,
    rotation_y: float = 0.0,
    rotation_z: float = 0.0,
    scale_x: float = 1.0,
    scale_y: float = 1.0,
    perspective: float = 0.0,
    ctx: Context = None
) -> str

Parameters:

- input_path: Source video file path
- output_path: Output transformed video path
- rotation_x: X-axis rotation in degrees
- rotation_y: Y-axis rotation in degrees
- rotation_z: Z-axis rotation in degrees
- scale_x: X-axis scale factor
- scale_y: Y-axis scale factor
- perspective: Perspective distortion amount
- ctx: Optional progress reporting context
Returns: Success message with output file path
apply_lens_effects
Camera lens simulation effects.

def apply_lens_effects(
    input_path: PathStr,
    output_path: PathStr,
    effect_type: Literal["vignette", "lens_flare", "chromatic_aberration", "barrel_distortion"] = "vignette",
    intensity: float = 0.5,
    center_x: float = 0.5,
    center_y: float = 0.5,
    ctx: Context = None
) -> str

Parameters:

- input_path: Source video file path
- output_path: Output video with lens effects
- effect_type: Type of lens effect to apply
- intensity: Effect intensity (0.0 to 1.0)
- center_x: Effect center X coordinate (0.0 to 1.0)
- center_y: Effect center Y coordinate (0.0 to 1.0)
- ctx: Optional progress reporting context
Returns: Success message with output file path
🔄 Batch Processing
batch_process_videos

Automated batch processing with configurable operations.

def batch_process_videos(
    input_directory: PathStr,
    output_directory: PathStr,
    operations: List[Dict[str, Any]],
    file_pattern: str = "*.mp4",
    overwrite_existing: bool = False,
    ctx: Context = None
) -> Dict[str, Any]

Parameters:

- input_directory: Directory containing input videos
- output_directory: Directory for processed videos
- operations: List of operations to apply to each video
- file_pattern: Glob pattern for input files
- overwrite_existing: Whether to overwrite existing output files
- ctx: Optional progress reporting context
Returns: Dictionary with processing results and statistics
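A sketch of a batch call follows. The schema of each entry in operations is not specified on this page; the "tool"/"params" keys below are a hypothetical shape for illustration only, and directory names are illustrative:

results = await session.call_tool("batch_process_videos", {
    "input_directory": "raw_clips/",
    "output_directory": "processed/",
    "file_pattern": "*.mp4",
    "operations": [
        # hypothetical operation shape; the real schema is defined by the server
        {"tool": "resize_video", "params": {"width": 1280, "quality": "high"}}
    ]
})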
📋 Usage Examples
Basic Tool Call Pattern

# Using MCP client
result = await session.call_tool("tool_name", {
    "input_path": "source.mp4",
    "output_path": "output.mp4",
    "parameter1": value1,
    "parameter2": value2
})
Error Handling
try:
    result = await session.call_tool("trim_video", {
        "input_path": "video.mp4",
        "output_path": "trimmed.mp4",
        "start_time": 0,
        "duration": 30
    })
    print(f"Success: {result}")
except Exception as e:
    print(f"Error: {e}")
Progress Reporting
Tools that support progress reporting will stream updates when a context is provided:
# Progress updates will be automatically streamed
result = await session.call_tool("apply_color_grading", {
    "input_path": "long_video.mp4",
    "output_path": "graded.mp4",
    "contrast": 1.2,
    "saturation": 1.1
    # Progress updates sent via MCP context
})
🔗 Related Documentation
- Tools Overview - High-level tool categorization
- Resource Endpoints - MCP resource API reference
- Common Workflows - Practical usage patterns
- Error Handling - Error types and handling
Questions about specific tool signatures? Check the individual tool category pages for detailed examples and usage scenarios.