
Other articles (59)
-
Submit bugs and patches
13 April 2011
Unfortunately, software is never perfect.
If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information:
- the browser you are using, including the exact version
- as precise an explanation of the problem as possible
- if possible, the steps taken that led to the problem
- a link to the site/page in question
If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
You may also (...) -
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page
-
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MédiaSpip installation is at version 0.2 or higher. If in doubt, contact the administrator of your MédiaSpip to find out.
Sur d’autres sites (14260)
-
AWS lambda SAM deploy error - Template format error: Unresolved resource dependencies
1 June 2022, by mozenge
I am trying to deploy an AWS Lambda function using the SAM CLI. I have some layers defined in the SAM template. Testing locally using
sam local start-api
works quite well, but deploying using the
sam deploy --guided
command throws the following error:
Error: Failed to create changeset for the stack: sam-app, ex: Waiter ChangeSetCreateComplete failed: Waiter encountered a terminal failure state: For expression "Status" we matched expected path: "FAILED" Status: FAILED. Reason: Template format error: Unresolved resource dependencies [arn:aws:lambda:us-west-1:338231645678:layer:ffmpeg:1] in the Resources block of the template


The SAM template is as follows


AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  video-processor-functions

  Functions to generate gif and thumbnail from uploaded videos

# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 3
    Tracing: Active

Resources:
  VideoProcessorFunctions:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: src/
      Handler: app.lambdaHandler
      Runtime: nodejs14.x
      # timeout in seconds - 2 minutes
      Timeout: 120
      Layers:
        - !Ref VideoProcessorDepLayer
        - !Ref arn:aws:lambda:us-west-1:338231645678:layer:ffmpeg:1
      Architectures:
        - x86_64
      Events:
        HelloWorld:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /hello
            Method: get

  VideoProcessorDepLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: mh-video-processor-dependencies
      Description: Dependencies for sam app [video-processor-functions]
      ContentUri: dependencies/
      CompatibleRuntimes:
        - nodejs14.17
      LicenseInfo: 'MIT'
      RetentionPolicy: Retain

Outputs:
  # ServerlessRestApi is an implicit API created out of Events key under Serverless::Function
  # Find out more about other implicit resources you can reference within SAM
  # https://github.com/awslabs/serverless-application-model/blob/master/docs/internals/generated_resources.rst#api
  HelloWorldApi:
    Description: "API Gateway endpoint URL for Prod stage for Hello World function"
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/hello/"
  VideoProcessorFunctions:
    Description: "Generate GIF and Thumnail from Video"
    Value: !GetAtt VideoProcessorFunctions.Arn
  VideoProcessorFunctionsIamRole:
    Description: "Implicit IAM Role created for MH Video Processor function"
    Value: !GetAtt VideoProcessorFunctionsRole.Arn




Any ideas what I'm doing wrong?
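For reference, a likely cause (an assumption based on the error text, not something stated in the question): in CloudFormation, !Ref only resolves parameters or logical resource IDs declared in the same template, so !Ref applied to a raw layer ARN is reported as an unresolved resource dependency. An external layer ARN is normally listed as a plain string. A minimal sketch of the Layers block under that assumption:

      Layers:
        - !Ref VideoProcessorDepLayer   # logical ID declared in this template
        - arn:aws:lambda:us-west-1:338231645678:layer:ffmpeg:1   # external ARN as a literal string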


-
add libaribb24 ARIB STD-B24 caption decoder
14 January 2019, by Jan Ekström
* Outputs ASS lines with basic coloring and font scaling for each given region.
* Sets the default style to the resolution of the subtitle plane (for example, 960x540 / 36pt font for profile A).
* Has options to:
  * Disable ruby text (which is coded as regions which have half-height text in libaribb24). Enabled by default, as without positioning, ruby text only confuses since it is usually coded at the beginning of the decoded subtitle line.
  * Set the working directory, in which libaribb24 will read configuration as well as into which it may save broadcast extra symbols as PNG. Unset by default.
The unconventional library check can be explained by the library's current master branch being licensed as LGPLv3, while at the time of writing the latest official release is still licensed under GPLv3. Thus, one either has to wait for the following release, or enable GPLv3.
-
My stitched frames' colors look very different from my original video, preventing the video from being stitched back together properly [closed]
27 May 2024, by Wer Wer
I am trying to extract some frames from my video to do a form of steganography. I accidentally used a 120 fps video, which made the files too big when I extracted every single frame. To fix this, I decided to calculate how many frames are needed to hide the bits (LSB replacement for every 8 bits) and then extract only that number of frames (a rough capacity estimate is sketched after the list below). This means:


- if I only need 1 frame, I'll extract frame0.png
- I'll remove frame0 from the original video
- encode my data into frame0.png
- stitch frame0 back into an FFV1 video
- concatenate the frame0 video with the rest of the video, with the frame0 video in front












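For reference, a rough capacity estimate along these lines (a minimal sketch, not taken from the code below; it assumes 1 LSB per colour channel, i.e. 3 hidden bits per pixel, and a fixed frame size):

import math

def frames_needed(message_bytes: int, width: int, height: int) -> int:
    # One LSB per colour channel (B, G, R) per pixel -> 3 hidden bits per pixel.
    bits_to_hide = message_bytes * 8
    bits_per_frame = width * height * 3
    return math.ceil(bits_to_hide / bits_per_frame)

# Example: a 10 kB message easily fits into a single 1920x1080 frame.
print(frames_needed(10_000, 1920, 1080))  # -> 1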
I can do the extraction and remove frame0 from the video. However, when looking at frame0.mkv and the original.mkv, I realised the colors seem to be different.
Frame0.mkv
original.mkv


This causes a glitch when stitching the videos together: the end of the video has some corrupted pixels, and playback stops where frame0 ends. I think those corrupted pixels were supposed to be original.mkv pixels, but they did not concatenate properly.
results.mkv


I use ffmpeg subcommands to extract frames and stitch them:


def split_into_frames(self, ffv1_video, hidden_text_length):
    if not ffv1_video.endswith(".mkv"):
        ffv1_video += ".mkv"

    ffv1_video_path = os.path.join(self.here, ffv1_video)
    ffv1_video = cv2.VideoCapture(ffv1_video_path)

    currentframe = 0
    total_frame_bits = 0
    frames_to_remove = []

    while True:
        ret, frame = ffv1_video.read()
        if ret:
            name = os.path.join(self.here, "data", f"frame{currentframe}.png")
            print("Creating..." + name)
            cv2.imwrite(name, frame)

            current_frame_path = os.path.join(
                self.here, "data", f"frame{currentframe}.png"
            )

            if os.path.exists(current_frame_path):
                binary_data = self.read_frame_binary(current_frame_path)

                if (total_frame_bits // 8) >= hidden_text_length:
                    print("Complete")
                    break
                total_frame_bits += len(binary_data)
                frames_to_remove.append(currentframe)
                currentframe += 1
        else:
            print("Complete")
            break

    ffv1_video.release()

    # Remove the extracted frames from the original video
    self.remove_frames_from_video(ffv1_video_path, frames_to_remove)



This code splits the video into the required number of frames, checking whether the total number of frame bits is enough to encode the hidden text.


def remove_frames_from_video(self, input_video, frames_to_remove):
    if not input_video.endswith(".mkv"):
        input_video += ".mkv"

    input_video_path = os.path.join(self.here, input_video)

    # Create a filter string to exclude specific frames
    filter_str = (
        "select='not("
        + "+".join([f"eq(n\,{frame})" for frame in frames_to_remove])
        + ")',setpts=N/FRAME_RATE/TB"
    )

    # Temporary output video path
    output_video_path = os.path.join(self.here, "temp_output.mkv")

    command = [
        "ffmpeg",
        "-y",
        "-i",
        input_video_path,
        "-vf",
        filter_str,
        "-c:v",
        "ffv1",
        "-level",
        "3",
        "-coder",
        "1",
        "-context",
        "1",
        "-g",
        "1",
        "-slices",
        "4",
        "-slicecrc",
        "1",
        "-an",  # Remove audio
        output_video_path,
    ]

    try:
        subprocess.run(command, check=True)
        print(f"Frames removed. Temporary video created at {output_video_path}")

        # Replace the original video with the new video
        os.replace(output_video_path, input_video_path)
        print(f"Original video replaced with updated video at {input_video_path}")

        # Re-add the trimmed audio to the new video
        self.trim_audio_and_add_to_video(input_video_path, frames_to_remove)
    except subprocess.CalledProcessError as e:
        print(f"An error occurred: {e}")
        if os.path.exists(output_video_path):
            os.remove(output_video_path)

def trim_audio_and_add_to_video(self, video_path, frames_to_remove):
    # Calculate the new duration based on the remaining frames
    fps = 60  # Assuming the framerate is 60 fps
    total_frames_removed = len(frames_to_remove)
    original_duration = self.get_video_duration(video_path)
    new_duration = original_duration - (total_frames_removed / fps)

    # Extract and trim the audio
    audio_path = os.path.join(self.here, "trimmed_audio.aac")
    command_extract_trim = [
        "ffmpeg",
        "-y",
        "-i",
        video_path,
        "-t",
        str(new_duration),
        "-q:a",
        "0",
        "-map",
        "a",
        audio_path,
    ]
    try:
        subprocess.run(command_extract_trim, check=True)
        print(f"Audio successfully trimmed and extracted to {audio_path}")

        # Add the trimmed audio back to the video
        final_video_path = video_path.replace(".mkv", "_final.mkv")
        command_add_audio = [
            "ffmpeg",
            "-y",
            "-i",
            video_path,
            "-i",
            audio_path,
            "-c:v",
            "copy",
            "-c:a",
            "aac",
            "-strict",
            "experimental",
            final_video_path,
        ]
        subprocess.run(command_add_audio, check=True)
        print(f"Final video with trimmed audio created at {final_video_path}")

        # Replace the original video with the final video
        os.replace(final_video_path, video_path)
        print(f"Original video replaced with final video at {video_path}")
    except subprocess.CalledProcessError as e:
        print(f"An error occurred: {e}")

def get_video_duration(self, video_path):
    command = [
        "ffprobe",
        "-v",
        "error",
        "-show_entries",
        "format=duration",
        "-of",
        "default=noprint_wrappers=1:nokey=1",
        video_path,
    ]
    try:
        result = subprocess.run(
            command, check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE
        )
        duration = float(result.stdout.decode().strip())
        return duration
    except subprocess.CalledProcessError as e:
        print(f"An error occurred while getting video duration: {e}")
        return 0.0



Here I remove all the frames that have been extracted from the video.
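Just to make the generated filter concrete, this is what remove_frames_from_video above builds for a hypothetical frames_to_remove = [0, 1, 2]; the setpts part regenerates timestamps so the remaining frames stay evenly spaced:

frames_to_remove = [0, 1, 2]
filter_str = (
    "select='not("
    + "+".join([f"eq(n\\,{frame})" for frame in frames_to_remove])
    + ")',setpts=N/FRAME_RATE/TB"
)
print(filter_str)
# select='not(eq(n\,0)+eq(n\,1)+eq(n\,2))',setpts=N/FRAME_RATE/TB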


def stitch_frames_to_video(self, ffv1_video, framerate=60):
    # this command is another ffmpeg subcommand.
    # it takes every single frame from data1 directory and stitch it back into a ffv1 video
    if not ffv1_video.endswith(".mkv"):
        ffv1_video += ".mkv"

    output_video_path = os.path.join(self.here, ffv1_video)

    command = [
        "ffmpeg",
        "-y",
        "-framerate",
        str(framerate),
        "-i",
        os.path.join(self.frames_directory, "frame%d.png"),
        "-c:v",
        "ffv1",
        "-level",
        "3",
        "-coder",
        "1",
        "-context",
        "1",
        "-g",
        "1",
        "-slices",
        "4",
        "-slicecrc",
        "1",
        output_video_path,
    ]

    try:
        subprocess.run(command, check=True)
        print(f"Video successfully created at {output_video_path}")
    except subprocess.CalledProcessError as e:
        print(f"An error occurred: {e}")



After encoding the frames, I try to stitch them back into an FFV1 video.
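One possible culprit here (an assumption on my part, not something confirmed in the question): when PNG frames are re-encoded to FFV1, ffmpeg negotiates an RGB-based pixel format by default, while an FFV1 file converted from an MP4 source is typically yuv420p, and concatenating two FFV1 streams with -c copy when their pixel formats differ can produce exactly this kind of corruption. A sketch of forcing a pixel format on the stitch command above (yuv420p is assumed, check the original file first; note that converting RGB frames to a chroma-subsampled format is not lossless, so it may disturb hidden LSBs):

command = [
    "ffmpeg",
    "-y",
    "-framerate",
    str(framerate),
    "-i",
    os.path.join(self.frames_directory, "frame%d.png"),
    "-c:v", "ffv1",
    "-level", "3",
    "-coder", "1",
    "-context", "1",
    "-g", "1",
    "-slices", "4",
    "-slicecrc", "1",
    "-pix_fmt", "yuv420p",  # assumed: match whatever pixel format the original FFV1 file uses
    output_video_path,
]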


def concatenate_videos(self, video1_path, video2_path, output_path):
    if not video1_path.endswith(".mkv"):
        video1_path += ".mkv"
    if not video2_path.endswith(".mkv"):
        video2_path += ".mkv"
    if not output_path.endswith(".mkv"):
        output_path += ".mkv"

    video1_path = os.path.join(self.here, video1_path)
    video2_path = os.path.join(self.here, video2_path)
    output_video_path = os.path.join(self.here, output_path)

    # Create a text file with the paths of the videos to concatenate
    concat_list_path = os.path.join(self.here, "concat_list.txt")
    with open(concat_list_path, "w") as f:
        f.write(f"file '{video1_path}'\n")
        f.write(f"file '{video2_path}'\n")

    command = [
        "ffmpeg",
        "-y",
        "-f",
        "concat",
        "-safe",
        "0",
        "-i",
        concat_list_path,
        "-c",
        "copy",
        output_video_path,
    ]

    try:
        subprocess.run(command, check=True)
        print(f"Videos successfully concatenated into {output_video_path}")
        os.remove(concat_list_path)  # Clean up the temporary file
    except subprocess.CalledProcessError as e:
        print(f"An error occurred: {e}")



Now I try to concatenate the frames video with the original video, but the result is corrupted because the colors are different.


This code does the remaining processing: it removes all the extracted frames from the video and trims the audio (though I think I will drop the audio trimming, as I realised it is not needed at all).


I think it's because the .png frames lose color information when they are extracted. The only workaround I know is to extract every single frame, but that makes the program run too long: for a 12-second video I would extract 700+ frames. Is there a way to fix this?
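Before changing anything, it may be worth confirming whether the two .mkv files really differ in pixel format, since a -c copy concat only behaves well when the streams match. A small check along these lines (the file names are the ones used in the main block below):

import subprocess

def pix_fmt(path: str) -> str:
    # Ask ffprobe for the pixel format of the first video stream.
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=pix_fmt",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        check=True, stdout=subprocess.PIPE,
    )
    return out.stdout.decode().strip()

print(pix_fmt("encoded.mkv"), pix_fmt("output.mkv"))  # if these differ, the copy concat will glitch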


My full code:


import json
import os
import shutil
import magic
import ffmpeg
import cv2
import numpy as np
import subprocess
from PIL import Image
import glob




class FFV1Steganography:
    def __init__(self):
        self.here = os.path.dirname(os.path.abspath(__file__))

        # Create a folder to save the frames
        self.frames_directory = os.path.join(self.here, "data")
        try:
            if not os.path.exists(self.frames_directory):
                os.makedirs(self.frames_directory)
        except OSError:
            print("Error: Creating directory of data")

    def read_hidden_text(self, filename):
        file_path_txt = os.path.join(self.here, filename)
        # Read the content of the file in binary mode
        with open(file_path_txt, "rb") as f:
            hidden_text_content = f.read()
        return hidden_text_content

    def calculate_length_of_hidden_text(self, filename):
        hidden_text_content = self.read_hidden_text(filename)
        # Convert each byte to its binary representation and join them
        return len("".join(format(byte, "08b") for byte in hidden_text_content))

    def find_raw_video_file(self, filename):
        file_extensions = [".mp4", ".mkv", ".avi"]
        for ext in file_extensions:
            file_path = os.path.join(self.here, filename + ext)
            if os.path.isfile(file_path):
                return file_path
        return None

    def convert_video(self, input_file, ffv1_video):
        # this function is the same as running this command line
        # ffmpeg -i video.mp4 -t 12 -c:v ffv1 -level 3 -coder 1 -context 1 -g 1 -slices 4 -slicecrc 1 -c:a copy output.mkv

        # in order to run any ffmpeg subprocess, you have to have ffmpeg installed into the computer.
        # https://ffmpeg.org/download.html

        # WARNING:
        # the ffmpeg you should download is not the same as the ffmpeg library for python.
        # you need to download the exe from the link above, then add ffmpeg bin directory to system variables
        output_file = os.path.join(self.here, ffv1_video)

        if not output_file.endswith(".mkv"):
            output_file += ".mkv"

        command = [
            "ffmpeg",
            "-y",
            "-i",
            input_file,
            "-t",
            "12",
            "-c:v",
            "ffv1",
            "-level",
            "3",
            "-coder",
            "1",
            "-context",
            "1",
            "-g",
            "1",
            "-slices",
            "4",
            "-slicecrc",
            "1",
            "-c:a",
            "copy",
            output_file,
        ]

        try:
            subprocess.run(command, check=True)
            print(f"Conversion successful: {output_file}")
            return output_file
        except subprocess.CalledProcessError as e:
            print(f"Error during conversion: {e}")

    def extract_audio(self, ffv1_video, audio_path):
        # Ensure the audio output file has the correct extension
        if not audio_path.endswith(".aac"):
            audio_path += ".aac"

        # Full path to the extracted audio file
        extracted_audio = os.path.join(self.here, audio_path)

        if not ffv1_video.endswith(".mkv"):
            ffv1_video += ".mkv"

        command = [
            "ffmpeg",
            "-i",
            ffv1_video,
            "-q:a",
            "0",
            "-map",
            "a",
            extracted_audio,
        ]
        try:
            result = subprocess.run(
                command, check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE
            )
            print(f"Audio successfully extracted to {extracted_audio}")
            print(result.stdout.decode())
            print(result.stderr.decode())
        except subprocess.CalledProcessError as e:
            print(f"An error occurred: {e}")
            print(e.stdout.decode())
            print(e.stderr.decode())

    def read_frame_binary(self, frame_path):
        # Open the image and convert to binary
        with open(frame_path, "rb") as f:
            binary_content = f.read()
        binary_string = "".join(format(byte, "08b") for byte in binary_content)
        return binary_string

    def remove_frames_from_video(self, input_video, frames_to_remove):
        if not input_video.endswith(".mkv"):
            input_video += ".mkv"

        input_video_path = os.path.join(self.here, input_video)

        # Create a filter string to exclude specific frames
        filter_str = (
            "select='not("
            + "+".join([f"eq(n\,{frame})" for frame in frames_to_remove])
            + ")',setpts=N/FRAME_RATE/TB"
        )

        # Temporary output video path
        output_video_path = os.path.join(self.here, "temp_output.mkv")

        command = [
            "ffmpeg",
            "-y",
            "-i",
            input_video_path,
            "-vf",
            filter_str,
            "-c:v",
            "ffv1",
            "-level",
            "3",
            "-coder",
            "1",
            "-context",
            "1",
            "-g",
            "1",
            "-slices",
            "4",
            "-slicecrc",
            "1",
            "-an",  # Remove audio
            output_video_path,
        ]

        try:
            subprocess.run(command, check=True)
            print(f"Frames removed. Temporary video created at {output_video_path}")

            # Replace the original video with the new video
            os.replace(output_video_path, input_video_path)
            print(f"Original video replaced with updated video at {input_video_path}")

            # Re-add the trimmed audio to the new video
            self.trim_audio_and_add_to_video(input_video_path, frames_to_remove)
        except subprocess.CalledProcessError as e:
            print(f"An error occurred: {e}")
            if os.path.exists(output_video_path):
                os.remove(output_video_path)

    def trim_audio_and_add_to_video(self, video_path, frames_to_remove):
        # Calculate the new duration based on the remaining frames
        fps = 60  # Assuming the framerate is 60 fps
        total_frames_removed = len(frames_to_remove)
        original_duration = self.get_video_duration(video_path)
        new_duration = original_duration - (total_frames_removed / fps)

        # Extract and trim the audio
        audio_path = os.path.join(self.here, "trimmed_audio.aac")
        command_extract_trim = [
            "ffmpeg",
            "-y",
            "-i",
            video_path,
            "-t",
            str(new_duration),
            "-q:a",
            "0",
            "-map",
            "a",
            audio_path,
        ]
        try:
            subprocess.run(command_extract_trim, check=True)
            print(f"Audio successfully trimmed and extracted to {audio_path}")

            # Add the trimmed audio back to the video
            final_video_path = video_path.replace(".mkv", "_final.mkv")
            command_add_audio = [
                "ffmpeg",
                "-y",
                "-i",
                video_path,
                "-i",
                audio_path,
                "-c:v",
                "copy",
                "-c:a",
                "aac",
                "-strict",
                "experimental",
                final_video_path,
            ]
            subprocess.run(command_add_audio, check=True)
            print(f"Final video with trimmed audio created at {final_video_path}")

            # Replace the original video with the final video
            os.replace(final_video_path, video_path)
            print(f"Original video replaced with final video at {video_path}")
        except subprocess.CalledProcessError as e:
            print(f"An error occurred: {e}")

    def get_video_duration(self, video_path):
        command = [
            "ffprobe",
            "-v",
            "error",
            "-show_entries",
            "format=duration",
            "-of",
            "default=noprint_wrappers=1:nokey=1",
            video_path,
        ]
        try:
            result = subprocess.run(
                command, check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE
            )
            duration = float(result.stdout.decode().strip())
            return duration
        except subprocess.CalledProcessError as e:
            print(f"An error occurred while getting video duration: {e}")
            return 0.0

    def split_into_frames(self, ffv1_video, hidden_text_length):
        if not ffv1_video.endswith(".mkv"):
            ffv1_video += ".mkv"

        ffv1_video_path = os.path.join(self.here, ffv1_video)
        ffv1_video = cv2.VideoCapture(ffv1_video_path)

        currentframe = 0
        total_frame_bits = 0
        frames_to_remove = []

        while True:
            ret, frame = ffv1_video.read()
            if ret:
                name = os.path.join(self.here, "data", f"frame{currentframe}.png")
                print("Creating..." + name)
                cv2.imwrite(name, frame)

                current_frame_path = os.path.join(
                    self.here, "data", f"frame{currentframe}.png"
                )

                if os.path.exists(current_frame_path):
                    binary_data = self.read_frame_binary(current_frame_path)

                    if (total_frame_bits // 8) >= hidden_text_length:
                        print("Complete")
                        break
                    total_frame_bits += len(binary_data)
                    frames_to_remove.append(currentframe)
                    currentframe += 1
            else:
                print("Complete")
                break

        ffv1_video.release()

        # Remove the extracted frames from the original video
        self.remove_frames_from_video(ffv1_video_path, frames_to_remove)

    def stitch_frames_to_video(self, ffv1_video, framerate=60):
        # this command is another ffmpeg subcommand.
        # it takes every single frame from data1 directory and stitch it back into a ffv1 video
        if not ffv1_video.endswith(".mkv"):
            ffv1_video += ".mkv"

        output_video_path = os.path.join(self.here, ffv1_video)

        command = [
            "ffmpeg",
            "-y",
            "-framerate",
            str(framerate),
            "-i",
            os.path.join(self.frames_directory, "frame%d.png"),
            "-c:v",
            "ffv1",
            "-level",
            "3",
            "-coder",
            "1",
            "-context",
            "1",
            "-g",
            "1",
            "-slices",
            "4",
            "-slicecrc",
            "1",
            output_video_path,
        ]

        try:
            subprocess.run(command, check=True)
            print(f"Video successfully created at {output_video_path}")
        except subprocess.CalledProcessError as e:
            print(f"An error occurred: {e}")

    def add_audio_to_video(self, encoded_video, audio_path, final_video):
        # the audio will be lost during splitting and restitching.
        # that is why previously we separated the audio from video and saved it as aac.
        # now, we can put the audio back into the video, again using ffmpeg subcommand.

        if not encoded_video.endswith(".mkv"):
            encoded_video += ".mkv"

        if not final_video.endswith(".mkv"):
            final_video += ".mkv"

        if not audio_path.endswith(".aac"):
            audio_path += ".aac"

        final_output_path = os.path.join(self.here, final_video)

        command = [
            "ffmpeg",
            "-y",
            "-i",
            os.path.join(self.here, encoded_video),
            "-i",
            os.path.join(self.here, audio_path),
            "-c:v",
            "copy",
            "-c:a",
            "aac",
            "-strict",
            "experimental",
            final_output_path,
        ]
        try:
            subprocess.run(command, check=True)
            print(f"Final video with audio created at {final_output_path}")
        except subprocess.CalledProcessError as e:
            print(f"An error occurred: {e}")

    def concatenate_videos(self, video1_path, video2_path, output_path):
        if not video1_path.endswith(".mkv"):
            video1_path += ".mkv"
        if not video2_path.endswith(".mkv"):
            video2_path += ".mkv"
        if not output_path.endswith(".mkv"):
            output_path += ".mkv"

        video1_path = os.path.join(self.here, video1_path)
        video2_path = os.path.join(self.here, video2_path)
        output_video_path = os.path.join(self.here, output_path)

        # Create a text file with the paths of the videos to concatenate
        concat_list_path = os.path.join(self.here, "concat_list.txt")
        with open(concat_list_path, "w") as f:
            f.write(f"file '{video1_path}'\n")
            f.write(f"file '{video2_path}'\n")

        command = [
            "ffmpeg",
            "-y",
            "-f",
            "concat",
            "-safe",
            "0",
            "-i",
            concat_list_path,
            "-c",
            "copy",
            output_video_path,
        ]

        try:
            subprocess.run(command, check=True)
            print(f"Videos successfully concatenated into {output_video_path}")
            os.remove(concat_list_path)  # Clean up the temporary file
        except subprocess.CalledProcessError as e:
            print(f"An error occurred: {e}")

    def cleanup(self, files_to_delete):
        # Delete specified files
        for file in files_to_delete:
            file_path = os.path.join(self.here, file)
            if os.path.exists(file_path):
                os.remove(file_path)
                print(f"Deleted file: {file_path}")
            else:
                print(f"File not found: {file_path}")

        # Delete the frames directory and its contents
        if os.path.exists(self.frames_directory):
            shutil.rmtree(self.frames_directory)
            print(f"Deleted directory and its contents: {self.frames_directory}")
        else:
            print(f"Directory not found: {self.frames_directory}")


if __name__ == "__main__":
    stego = FFV1Steganography()

    # original video (mp4,mkv,avi)
    original_video = "video"
    # converted ffv1 video
    ffv1_video = "output"
    # extracted audio
    extracted_audio = "audio"
    # encoded video without sound
    encoded_video = "encoded"
    # final result video, encoded, with sound
    final_video = "result"

    # region --hidden text processing --
    hidden_text = stego.read_hidden_text("hiddentext.txt")
    hidden_text_length = stego.calculate_length_of_hidden_text("hiddentext.txt")
    # endregion

    # region -- raw video locating --
    raw_video_file = stego.find_raw_video_file(original_video)
    if raw_video_file:
        print(f"Found video file: {raw_video_file}")
    else:
        print("video.mp4 not found.")
    # endregion

    # region -- video processing INPUT--
    # converted_video_file = stego.convert_video(raw_video_file, ffv1_video)
    # if converted_video_file and os.path.exists(converted_video_file):
    #     stego.extract_audio(converted_video_file, extracted_audio)
    # else:
    #     print(f"Conversion failed: {converted_video_file} not found.")

    # stego.split_into_frames(ffv1_video, hidden_text_length * 50000)
    # endregion

    # region -- video processing RESULT --
    # stego.stitch_frames_to_video(encoded_video)
    stego.concatenate_videos(encoded_video, ffv1_video, final_video)
    # stego.add_audio_to_video(final_video, extracted_audio, final_video)
    # endregion

    # region -- cleanup --
    files_to_delete = [
        extracted_audio + ".aac",
        encoded_video + ".mkv",
        ffv1_video + ".mkv",
    ]

    stego.cleanup(files_to_delete)
    # endregion








Edit for result expectations:
I don't know if there is a way to match the exact color encoding between the stitched PNG frames and the rest of the FFV1 video. Is there a way I can extract the frames such that the color, the encoding, or anything else I may not know about FFV1 matches the original FFV1 video?