[TypeScript] - Optimize FFmpeg performance with multi-threading and parallel processing #9
Conversation
/review

Changelist by Bito: This pull request implements the following key changes.
Interaction Diagram by Bito

sequenceDiagram
    participant User as User
    participant App as App (Renderer)<br/>React Component
    participant Hook as useFfmpegOperations<br/>🔄 Updated | ●●○ Medium
    participant FFmpeg as ffmpeg.ts<br/>🔄 Updated | ●●● High
    participant FFmpegOpt as ffmpeg-optimizations.ts<br/>🟩 Added | ●●○ Medium
    participant Process as FFmpeg Process
    participant Logger as Logger
    User->>App: Load video file or trigger export
    App->>Hook: Call export/capture operation
    Hook->>FFmpeg: Build optimized FFmpeg args
    FFmpeg->>FFmpeg: optimizeFFmpegArgs adds performance flags
    FFmpeg->>Process: Execute runFfmpegProcess with optimized options
    Process->>Process: Initialize with multi-threading (-threads 0)
    Process->>Process: Apply I/O optimizations (-fflags, -avioflags)
    alt [Progress updates enabled]
        Process-->>FFmpeg: Emit progress via stderr
        FFmpeg->>FFmpeg: handleProgress throttles updates (50ms)
        FFmpeg->>FFmpeg: Skip if progress change < 0.1%
        FFmpeg-->>Hook: onProgress callback
        Hook-->>App: Update UI progress bar
    end
    Process-->>FFmpeg: Return process result
    FFmpeg-->>Hook: Return stdout/result
    Hook-->>App: Render completion or error
    App-->>User: Display result or next step
    Note over FFmpeg, Process: Performance optimizations reduce overhead<br/>and UI thread blocking

Critical path: User->App->useFfmpegOperations->ffmpeg.ts (MODIFIED)->FFmpeg Process->Logger
Code Review Agent Run #5ff570
Actionable Suggestions - 8
- typescript_codebase/src/main/ffmpeg-optimizations-fixed.ts - 3
  - Code Duplication · Line 1-254
  - Invalid FFmpeg Option · Line 1-254
  - Broken Concurrency Logic · Line 1-254
- typescript_codebase/src/main/ffmpeg.ts - 2
  - Progress update throttling changes behavior · Line 76-129
  - Missing env config in execa options · Line 922-926
- typescript_codebase/src/renderer/src/hooks/useFfmpegOperations.ts - 1
  - Improper Error Type · Line 897-897
- typescript_codebase/src/main/ffmpeg-optimizations.ts - 2
  - Invalid FFmpeg option · Line 18-19
  - Concurrency not enforced · Line 79-90
Additional Suggestions - 5
- typescript_codebase/src/main/ffmpeg-optimizations-fixed.ts - 2
  - Potential Stack Overflow · Line 219-236 - The recursive processNext function may cause a stack overflow with many segments due to deep recursion. Use an iterative approach instead (see the sketch after this list).
  - Misleading Comment · Line 132-136 - The comment says 'Enable fast seeking', but -accurate_seek enables accurate (not fast) seeking by decoding extra frames. Update the comment or verify the intent.
- typescript_codebase/src/renderer/src/hooks/useFfmpegOperations.ts - 1
  - Debug Logging · Line 810-811 - These console.log statements log function parameters and appear to be leftover debug code; they should be removed for production.
- typescript_codebase/src/main/ffmpeg.ts - 2
  - Simplified rotate handling · Line 874-874 - Removed conditional rotate logic and always apply -noautorotate. If rotate was intended to be configurable, add it back as a parameter.
  - Logging behavior changed · Line 920-920 - Logging was changed from unconditional to conditional on enableLog (which is false), so no logging occurs. If this was unintentional, consider restoring unconditional logging or making enableLog a parameter.
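To illustrate the iterative approach suggested for processNext above, here is a minimal sketch that replaces the recursion with a loop-driven worker pool. The shape loosely mirrors createParallelSegmentProcessor from the reviewed file, but the exact signature, types, and option names here are assumptions for illustration, not the project's API.

```ts
// Sketch only: an iterative variant of the parallel segment processor.
// Each worker loops instead of recursing, so call-stack depth stays constant
// no matter how many segments there are.
export function createIterativeSegmentProcessor<T, R>(
  segments: T[],
  options: { maxConcurrency?: number } = {}
) {
  const { maxConcurrency = 2 } = options;

  return async function processSegments(
    processor: (segment: T, index: number) => Promise<R>
  ): Promise<(R | { error: unknown })[]> {
    const results: (R | { error: unknown })[] = new Array(segments.length);
    let nextIndex = 0;

    const worker = async () => {
      // No await between the bounds check and the index claim, so two workers
      // can never pick up the same segment in single-threaded JS.
      while (nextIndex < segments.length) {
        const currentIndex = nextIndex;
        nextIndex += 1;
        try {
          results[currentIndex] = await processor(segments[currentIndex], currentIndex);
        } catch (error) {
          results[currentIndex] = { error };
        }
      }
    };

    const workerCount = Math.min(maxConcurrency, segments.length);
    await Promise.all(Array.from({ length: workerCount }, () => worker()));
    return results;
  };
}
```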
Review Details
- Files reviewed - 4 · Commit Range: fc44853..fc44853
  - typescript_codebase/src/main/ffmpeg-optimizations-fixed.ts
  - typescript_codebase/src/main/ffmpeg-optimizations.ts
  - typescript_codebase/src/main/ffmpeg.ts
  - typescript_codebase/src/renderer/src/hooks/useFfmpegOperations.ts
- Files skipped - 0
- Tools
- Whispers (Secret Scanner) - ✔︎ Successful
- Detect-secrets (Secret Scanner) - ✔︎ Successful
- Eslint (Linter) - ✔︎ Successful
Bito Usage Guide
Commands
Type the following command in a pull request comment and save the comment.
- /review - Manually triggers a full AI review.
- /pause - Pauses automatic reviews on this pull request.
- /resume - Resumes automatic reviews.
- /resolve - Marks all Bito-posted review comments as resolved.
- /abort - Cancels all in-progress reviews.
Refer to the documentation for additional commands.
Configuration
This repository uses the Default Agent. You can customize the agent settings or contact your Bito workspace admin at muhammad.furqan@bito.ai.
// Performance optimizations for LosslessCut FFmpeg operations
// This file contains optimized versions of key functions to improve processing speed

// Optimization 1: Improved FFmpeg argument handling with better memory management
export function optimizeFFmpegArgs(baseArgs: string[]): string[] {
  const optimizedArgs = [
    ...baseArgs,
    // Enable multi-threading for better CPU utilization
    "-threads",
    "0", // Use all available CPU cores
    // Optimize I/O operations
    "-fflags",
    "+discardcorrupt+genpts",
    // Reduce memory usage and improve processing speed
    "-avioflags",
    "direct",
    // Fast seeking optimizations
    "-ss_after_input",
    "1",
    // Reduce overhead
    "-copytb",
    "1",
  ];

  return optimizedArgs;
}

// Optimization 2: Improved progress handling with better performance (simplified)
export function optimizedHandleProgress(
  process: { stderr: any },
  duration: number | undefined,
  onProgress: (progress: number) => void,
  customMatcher?: (line: string) => void
) {
  if (!onProgress || !process.stderr) return;

  onProgress(0);

  // Note: This is a simplified version that would need proper stream handling
  // in a real implementation with readline or similar stream processing
}

// Optimization 3: Batch processing optimization
export function createOptimizedBatchProcessor<T>(
  items: T[],
  processor: (item: T) => Promise<any>,
  options: {
    concurrency?: number;
    batchSize?: number;
    progressCallback?: (completed: number, total: number) => void;
  } = {}
) {
  const { concurrency = 4, batchSize = 10, progressCallback } = options;

  return async function processBatch() {
    const results: any[] = [];
    let completed = 0;

    // Process in optimized batches
    for (let i = 0; i < items.length; i += batchSize) {
      const batch = items.slice(i, i + batchSize);

      // Process batch items with controlled concurrency
      const batchPromises = batch.map(async (item) => {
        const result = await processor(item);
        completed++;

        if (
          progressCallback &&
          completed % Math.max(1, Math.floor(items.length / 100)) === 0
        ) {
          progressCallback(completed, items.length);
        }

        return result;
      });

      // Process with limited concurrency to avoid overwhelming the system
      const batchResults = await Promise.all(
        batchPromises.slice(0, concurrency)
      );
      results.push(...batchResults);

      // Process remaining items in the batch
      if (batchPromises.length > concurrency) {
        const remainingResults = await Promise.all(
          batchPromises.slice(concurrency)
        );
        results.push(...remainingResults);
      }
    }

    return results;
  };
}

// Optimization 4: Memory-efficient stream processing (simplified)
export function createOptimizedStreamProcessor(
  options: {
    bufferSize?: number;
    highWaterMark?: number;
  } = {}
) {
  const { bufferSize = 64 * 1024, highWaterMark = 16 * 1024 } = options;

  return {
    execaOptions: {
      buffer: false,
      stdio: ["pipe", "pipe", "pipe"],
      maxBuffer: bufferSize,
      encoding: "buffer" as const,
      // Optimize child process creation
      windowsHide: true,
      // Reduce memory overhead
      cleanup: true,
      all: false,
    },

    streamOptions: {
      highWaterMark,
      objectMode: false,
    },
  };
}

// Optimization 5: Improved seeking performance
export function getOptimizedSeekArgs(from?: number, to?: number): string[] {
  const args: string[] = [];

  if (from != null) {
    // Use precise seeking for better performance
    args.push("-ss", from.toFixed(6));
    // Enable fast seeking when possible
    if (from > 1) {
      args.push("-accurate_seek");
    }
  }

  if (to != null && from != null) {
    const duration = to - from;
    args.push("-t", duration.toFixed(6));
  }

  return args;
}

// Optimization 6: Codec-specific optimizations
export function getOptimizedCodecArgs(
  codec: string,
  quality: "fast" | "balanced" | "quality" = "balanced"
): string[] {
  const presets = {
    libx264: {
      fast: ["-preset", "ultrafast", "-tune", "zerolatency"],
      balanced: ["-preset", "medium", "-crf", "23"],
      quality: ["-preset", "slow", "-crf", "18"],
    },
    libx265: {
      fast: ["-preset", "ultrafast", "-x265-params", "log-level=error"],
      balanced: ["-preset", "medium", "-crf", "28"],
      quality: ["-preset", "slow", "-crf", "24"],
    },
    copy: {
      fast: ["-c", "copy"],
      balanced: ["-c", "copy"],
      quality: ["-c", "copy"],
    },
  };

  return presets[codec as keyof typeof presets]?.[quality] || ["-c", "copy"];
}

// Optimization 7: Smart quality detection
export function detectOptimalQuality(
  _inputFile: string,
  streams: any[]
): "fast" | "balanced" | "quality" {
  // Analyze file characteristics to determine optimal quality setting
  const videoStream = streams.find((s) => s.codec_type === "video");

  if (!videoStream) return "fast";

  const resolution = (videoStream.width || 0) * (videoStream.height || 0);
  const bitrate = parseInt(videoStream.bit_rate) || 0;

  // HD+ content with high bitrate - use quality mode
  if (resolution >= 1920 * 1080 && bitrate > 5000000) {
    return "quality";
  }

  // Standard definition or lower bitrate - use fast mode
  if (resolution <= 720 * 480 || bitrate < 1000000) {
    return "fast";
  }

  // Default to balanced
  return "balanced";
}

// Optimization 8: Parallel processing for multiple segments
export function createParallelSegmentProcessor(
  segments: any[],
  options: {
    maxConcurrency?: number;
    resourceLimit?: number;
  } = {}
) {
  const { maxConcurrency = 2, resourceLimit = 4 } = options;

  return async function processSegments(
    processor: (segment: any, index: number) => Promise<any>
  ) {
    const semaphore = new Array(Math.min(maxConcurrency, resourceLimit)).fill(
      null
    );
    let segmentIndex = 0;
    const results: any[] = [];

    const processNext = async () => {
      if (segmentIndex >= segments.length) return;

      const currentIndex = segmentIndex++;
      const segment = segments[currentIndex];

      try {
        const result = await processor(segment, currentIndex);
        results[currentIndex] = result;
      } catch (error) {
        results[currentIndex] = { error };
      }

      // Continue processing if there are more segments
      if (segmentIndex < segments.length) {
        await processNext();
      }
    };

    // Start parallel processing
    await Promise.all(semaphore.map(() => processNext()));

    return results;
  };
}

export default {
  optimizeFFmpegArgs,
  optimizedHandleProgress,
  createOptimizedBatchProcessor,
  createOptimizedStreamProcessor,
  getOptimizedSeekArgs,
  getOptimizedCodecArgs,
  detectOptimalQuality,
  createParallelSegmentProcessor,
};
This file is identical to ffmpeg-optimizations.ts. If intended as a replacement, remove the old file to avoid duplication. Otherwise, consider merging or renaming.
The option "-ss_after_input" is not a valid FFmpeg option. This will cause FFmpeg commands to fail with an unrecognized option error. Remove this line to prevent runtime failures.
The concurrency limit is not properly implemented: every promise in the batch starts executing as soon as batch.map(async ...) creates it, so awaiting the promises in slices afterwards does not limit how many run at once. This may overwhelm the system with large batches. Refactor to limit concurrent executions; one possible sketch follows.
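One possible shape of a fix, as a hedged sketch: create the processor promises chunk-by-chunk so that at most `concurrency` of them are in flight at once. The parameter names mirror createOptimizedBatchProcessor in the reviewed file, but this is an illustration, not the project's code; the chunked loop trades some throughput for simplicity, since each chunk waits on its slowest item.

```ts
// Sketch only: concurrency is bounded because promises are created
// chunk-by-chunk, not all at once with batch.map().
async function processWithBoundedConcurrency<T, R>(
  items: T[],
  processor: (item: T) => Promise<R>,
  concurrency = 4,
  progressCallback?: (completed: number, total: number) => void
): Promise<R[]> {
  const results: R[] = [];
  for (let i = 0; i < items.length; i += concurrency) {
    // processor() is only invoked here, so no more than `concurrency`
    // operations run at the same time.
    const chunk = items.slice(i, i + concurrency);
    const chunkResults = await Promise.all(chunk.map((item) => processor(item)));
    results.push(...chunkResults);
    progressCallback?.(results.length, items.length);
  }
  return results;
}
```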
 function handleProgress(
   process: { stderr: Readable | null },
   duration: number | undefined,
   onProgress: (a: number) => void,
-  customMatcher?: (a: string) => void,
+  customMatcher?: (a: string) => void
 ) {
   if (!onProgress) return;
   if (process.stderr == null) return;
   onProgress(0);
 
-  const rl = readline.createInterface({ input: process.stderr });
-  rl.on('line', (line) => {
+  // Performance optimization: Create readline interface with optimized settings
+  const rl = readline.createInterface({
+    input: process.stderr,
+    // Optimize for performance
+    crlfDelay: Infinity,
+    historySize: 0, // Disable history to save memory
+  });
+
+  // Throttle progress updates to reduce UI overhead
+  let lastProgressTime = 0;
+  const progressThrottle = 50; // Update progress max every 50ms
+  let lastProgress = 0;
+
+  rl.on("line", (line) => {
     // console.log('progress', line);
 
     try {
-      const progress = parseFfmpegProgressLine({ line, customMatcher, duration });
-      if (progress != null) {
+      const now = Date.now();
+
+      // Skip processing if too frequent (performance optimization)
+      if (now - lastProgressTime < progressThrottle) return;
+
+      const progress = parseFfmpegProgressLine({
+        line,
+        customMatcher,
+        duration,
+      });
+      if (progress != null && Math.abs(progress - lastProgress) > 0.001) {
+        // Only update if progress changed significantly
         onProgress(progress);
+        lastProgressTime = now;
+        lastProgress = progress;
       }
     } catch (err) {
-      logger.error('Failed to parse ffmpeg progress line:', err instanceof Error ? err.message : err);
+      // Reduce logging overhead - only log in debug mode
+      if (logger.level === "debug") {
+        logger.error(
+          "Failed to parse ffmpeg progress line:",
+          err instanceof Error ? err.message : err
+        );
+      }
     }
   });
 }
The added throttling in handleProgress may skip progress updates when they arrive more often than every 50ms or change by less than 0.001, making the progress bar less responsive. In particular, if the final progress line lands inside the throttle window it is dropped entirely, so the bar may never reach its final value. While this is a performance optimization, it alters observable behavior compared to the original unthrottled version.
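If the throttling is kept, a trailing-edge throttle would preserve the reduced update rate while guaranteeing that the most recent value (including the final one) is still delivered. The sketch below is a generic helper, not the project's handleProgress; the 50 ms interval is carried over from the diff above, and the returned wrapper would be called in place of onProgress inside the line handler.

```ts
// Sketch only: limit onProgress to one call per intervalMs, but always flush
// the latest value at the end of the window so the final update is not lost.
function createTrailingThrottle(onProgress: (progress: number) => void, intervalMs = 50) {
  let lastEmit = 0;
  let pending: number | undefined;
  let timer: ReturnType<typeof setTimeout> | undefined;

  return (progress: number) => {
    const now = Date.now();
    if (now - lastEmit >= intervalMs) {
      lastEmit = now;
      onProgress(progress); // leading edge: emit immediately
      return;
    }
    pending = progress; // too soon: remember the value...
    if (timer == null) {
      timer = setTimeout(() => {
        // ...and flush it once the window has elapsed (trailing edge)
        timer = undefined;
        if (pending != null) {
          lastEmit = Date.now();
          onProgress(pending);
          pending = undefined;
        }
      }, intervalMs - (now - lastEmit));
    }
  };
}
```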
  return execa(getFfmpegPath(), args, {
    encoding: "buffer",
    buffer: false,
    stderr: enableLog ? "inherit" : "pipe",
  });
The execa options were changed from using getExecaOptions to hardcoded values, but this omits the env configuration that sets LD_LIBRARY_PATH on Linux, which could cause FFmpeg library loading failures. Restore the getExecaOptions call to ensure proper environment setup.
Code suggestion
Check the AI-generated fix before applying
--- typescript_codebase/src/main/ffmpeg.ts
+++ typescript_codebase/src/main/ffmpeg.ts
@@ -920,6 +920,6 @@
if (enableLog) logger.info(getFfCommandLine("ffmpeg", args));
- return execa(getFfmpegPath(), args, {
- encoding: "buffer",
- buffer: false,
- stderr: enableLog ? "inherit" : "pipe",
- });
+ return execa(getFfmpegPath(), args, getExecaOptions({ buffer: false, stderr: enableLog ? "inherit" : "pipe" }));
-  if (segmentNeedsSmartCut && !detectedFps) throw new UserFacingError(i18n.t('Smart cut is not possible when FPS is unknown'));
   console.log('Smart cut on video stream', videoStream.index);
+  if (segmentNeedsSmartCut && !detectedFps)
+    throw new Error("Smart cut is not possible when FPS is unknown");
This error should be user-facing and internationalized, but it's thrown as a plain Error without i18n, unlike similar errors in the codebase.
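Assuming UserFacingError and i18n.t are still importable in this hook (as the removed line above suggests), restoring the original pattern would look roughly like this sketch:

```ts
// Sketch only: rethrow as a translated, user-facing error, matching the
// pattern on the removed line above.
if (segmentNeedsSmartCut && !detectedFps) {
  throw new UserFacingError(i18n.t('Smart cut is not possible when FPS is unknown'));
}
```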
| "-ss_after_input", | ||
| "1", |
There is no -ss_after_input option in FFmpeg; this appears to be a misunderstanding of output seeking syntax. FFmpeg will fail with an unrecognized option, potentially halting video operations.
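For context, FFmpeg chooses between fast and accurate seeking based on where -ss is placed relative to -i (before the input it seeks in the demuxer; after the input it decodes up to the target), rather than via a dedicated flag. The helper below is a hypothetical illustration of building such argument lists and is not code from this PR.

```ts
// Sketch only: seek behavior in FFmpeg's CLI is positional.
//   Input seeking  (fast):     ffmpeg -ss 30 -i input.mp4 ...
//   Output seeking (accurate): ffmpeg -i input.mp4 -ss 30 ...
function buildSeekArgs(inputFile: string, seekSeconds: number, accurateOutputSeek: boolean): string[] {
  return accurateOutputSeek
    ? ['-i', inputFile, '-ss', seekSeconds.toFixed(6)] // -ss after -i: output option
    : ['-ss', seekSeconds.toFixed(6), '-i', inputFile]; // -ss before -i: input option
}
```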
      const batchResults = await Promise.all(
        batchPromises.slice(0, concurrency)
      );
      results.push(...batchResults);

      // Process remaining items in the batch
      if (batchPromises.length > concurrency) {
        const remainingResults = await Promise.all(
          batchPromises.slice(concurrency)
        );
        results.push(...remainingResults);
      }
The current implementation creates all batchPromises at once, causing them to execute concurrently without limit, defeating the concurrency control. This may overwhelm system resources during large batch processing.
Summary by Bito