
Other articles (96)
-
HTML5 audio and video support
13 April 2011 — MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
HTML5 audio and video support
10 April 2011 — MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash fallback is used.
The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...) -
From upload to the final video [standalone version]
31 January 2010 — The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
On other sites (11887)
-
My stitched frames' colors look very different from my original video, causing the video to not stitch back properly [closed]
27 May 2024, by Wer Wer — I am trying to extract some frames from my video to do some form of steganography. I accidentally used a 120 fps video, which makes the output too large when I extract every single frame. To fix this, I decided to calculate how many frames are needed to hide the bits (LSB replacement for every 8 bits) and then extract only that number of frames. This means:


- if I only need 1 frame, I extract frame0.png
- I remove frame0 from the original video
- I encode my data into frame0.png (a minimal LSB sketch follows this list)
- I stitch frame0 back into an FFV1 video
- I concatenate the frame0 video with the rest of the video, with the frame0 video in front

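For context, the "encode my data into frame0.png" step is the LSB replacement mentioned above. A minimal sketch of that idea (my illustration, not the poster's actual encoder; the file name and message are placeholders) could look like this:

import cv2
import numpy as np

def embed_lsb(png_in, message: bytes, png_out):
    # Load the PNG as an H x W x 3 uint8 array (BGR order in OpenCV).
    img = cv2.imread(png_in, cv2.IMREAD_COLOR)
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = img.reshape(-1)                     # view over the pixel bytes
    if bits.size > flat.size:
        raise ValueError("message does not fit into this frame")
    # Replace the least significant bit of the first len(bits) bytes.
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    cv2.imwrite(png_out, img)                  # PNG is lossless, so the bits survive

embed_lsb("data/frame0.png", b"hello", "data/frame0.png")
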
I can do the extraction and remove frame0 from the video. However, when looking at frame0.mkv and the original.mkv, I realised the colors seem to be different.
Frame0.mkv
original.mkv


This causes a glitch when the videos are stitched together: the end of the video has some corrupted pixels, and playback stops where frame0 ends. I think those corrupted pixels were supposed to be original.mkv pixels, but they did not concatenate properly.
results.mkv
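
Before touching the stitching, it may help to compare the video stream parameters of the two files; if the pixel format, color space, or color range differ, both the color shift and the corrupted "-c copy" concatenation are expected. A small probing sketch (file names taken from the post, fields are standard ffprobe stream entries):

import json
import subprocess

def probe_video_stream(path):
    # Ask ffprobe for the fields that must match for lossless concatenation.
    cmd = [
        "ffprobe", "-v", "error",
        "-select_streams", "v:0",
        "-show_entries",
        "stream=codec_name,width,height,pix_fmt,color_space,color_range",
        "-of", "json",
        path,
    ]
    out = subprocess.run(cmd, check=True, stdout=subprocess.PIPE).stdout
    return json.loads(out)["streams"][0]

for name in ("frame0.mkv", "original.mkv"):  # file names as described in the post
    print(name, probe_video_stream(name))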


I use ffmpeg subcommands to extract frames and stitch them back together.


def split_into_frames(self, ffv1_video, hidden_text_length):
    if not ffv1_video.endswith(".mkv"):
        ffv1_video += ".mkv"

    ffv1_video_path = os.path.join(self.here, ffv1_video)
    ffv1_video = cv2.VideoCapture(ffv1_video_path)

    currentframe = 0
    total_frame_bits = 0
    frames_to_remove = []

    while True:
        ret, frame = ffv1_video.read()
        if ret:
            name = os.path.join(self.here, "data", f"frame{currentframe}.png")
            print("Creating..." + name)
            cv2.imwrite(name, frame)

            current_frame_path = os.path.join(
                self.here, "data", f"frame{currentframe}.png"
            )

            if os.path.exists(current_frame_path):
                binary_data = self.read_frame_binary(current_frame_path)

                if (total_frame_bits // 8) >= hidden_text_length:
                    print("Complete")
                    break
                total_frame_bits += len(binary_data)
                frames_to_remove.append(currentframe)
            currentframe += 1
        else:
            print("Complete")
            break

    ffv1_video.release()

    # Remove the extracted frames from the original video
    self.remove_frames_from_video(ffv1_video_path, frames_to_remove)




This code splits the video into the required number of frames; it checks whether the total number of frame bits is enough to encode the hidden text.
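
If only the first N frames are needed, an alternative worth trying (a sketch, not the poster's code) is to let ffmpeg itself write the PNGs, so the YUV-to-RGB conversion is not split between OpenCV and ffmpeg; the paths and frame count below are placeholders:

import os
import subprocess

def extract_first_frames(ffv1_video_path, frames_dir, n_frames):
    # Decode only the first n_frames frames and dump them as PNGs.
    command = [
        "ffmpeg", "-y",
        "-i", ffv1_video_path,
        "-frames:v", str(n_frames),
        "-start_number", "0",          # name the files frame0.png, frame1.png, ...
        os.path.join(frames_dir, "frame%d.png"),
    ]
    subprocess.run(command, check=True)

extract_first_frames("output.mkv", "data", 1)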


def remove_frames_from_video(self, input_video, frames_to_remove):
    if not input_video.endswith(".mkv"):
        input_video += ".mkv"

    input_video_path = os.path.join(self.here, input_video)

    # Create a select filter that drops the extracted frames and
    # regenerates timestamps for the remaining ones
    filter_str = (
        "select='not("
        + "+".join([f"eq(n\\,{frame})" for frame in frames_to_remove])
        + ")',setpts=N/FRAME_RATE/TB"
    )

    # Temporary output video path
    output_video_path = os.path.join(self.here, "temp_output.mkv")

    command = [
        "ffmpeg", "-y",
        "-i", input_video_path,
        "-vf", filter_str,
        "-c:v", "ffv1", "-level", "3", "-coder", "1",
        "-context", "1", "-g", "1", "-slices", "4", "-slicecrc", "1",
        "-an",  # Remove audio
        output_video_path,
    ]

    try:
        subprocess.run(command, check=True)
        print(f"Frames removed. Temporary video created at {output_video_path}")

        # Replace the original video with the new video
        os.replace(output_video_path, input_video_path)
        print(f"Original video replaced with updated video at {input_video_path}")

        # Re-add the trimmed audio to the new video
        self.trim_audio_and_add_to_video(input_video_path, frames_to_remove)
    except subprocess.CalledProcessError as e:
        print(f"An error occurred: {e}")
        if os.path.exists(output_video_path):
            os.remove(output_video_path)


def trim_audio_and_add_to_video(self, video_path, frames_to_remove):
    # Calculate the new duration based on the remaining frames
    fps = 60  # Assuming the framerate is 60 fps
    total_frames_removed = len(frames_to_remove)
    original_duration = self.get_video_duration(video_path)
    new_duration = original_duration - (total_frames_removed / fps)

    # Extract and trim the audio
    audio_path = os.path.join(self.here, "trimmed_audio.aac")
    command_extract_trim = [
        "ffmpeg", "-y",
        "-i", video_path,
        "-t", str(new_duration),
        "-q:a", "0",
        "-map", "a",
        audio_path,
    ]
    try:
        subprocess.run(command_extract_trim, check=True)
        print(f"Audio successfully trimmed and extracted to {audio_path}")

        # Add the trimmed audio back to the video
        final_video_path = video_path.replace(".mkv", "_final.mkv")
        command_add_audio = [
            "ffmpeg", "-y",
            "-i", video_path,
            "-i", audio_path,
            "-c:v", "copy",
            "-c:a", "aac",
            "-strict", "experimental",
            final_video_path,
        ]
        subprocess.run(command_add_audio, check=True)
        print(f"Final video with trimmed audio created at {final_video_path}")

        # Replace the original video with the final video
        os.replace(final_video_path, video_path)
        print(f"Original video replaced with final video at {video_path}")
    except subprocess.CalledProcessError as e:
        print(f"An error occurred: {e}")


def get_video_duration(self, video_path):
    command = [
        "ffprobe", "-v", "error",
        "-show_entries", "format=duration",
        "-of", "default=noprint_wrappers=1:nokey=1",
        video_path,
    ]
    try:
        result = subprocess.run(
            command, check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE
        )
        return float(result.stdout.decode().strip())
    except subprocess.CalledProcessError as e:
        print(f"An error occurred while getting video duration: {e}")
        return 0.0



Here I remove all the frames that have been extracted from the video.
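
A quick sanity check after this step (my addition, not part of the original code) is to count the frames actually left in the trimmed file and compare it with the expected number:

import subprocess

def count_frames(path):
    # Decode the whole video stream and report the real number of frames.
    cmd = [
        "ffprobe", "-v", "error",
        "-count_frames",
        "-select_streams", "v:0",
        "-show_entries", "stream=nb_read_frames",
        "-of", "default=nokey=1:noprint_wrappers=1",
        path,
    ]
    out = subprocess.run(cmd, check=True, stdout=subprocess.PIPE).stdout
    return int(out.decode().strip())

# Should equal the original frame count minus len(frames_to_remove).
print(count_frames("output.mkv"))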


def stitch_frames_to_video(self, ffv1_video, framerate=60):
    # Another ffmpeg subcommand: it takes every frame from the data
    # directory and stitches them back into an FFV1 video.
    if not ffv1_video.endswith(".mkv"):
        ffv1_video += ".mkv"

    output_video_path = os.path.join(self.here, ffv1_video)

    command = [
        "ffmpeg", "-y",
        "-framerate", str(framerate),
        "-i", os.path.join(self.frames_directory, "frame%d.png"),
        "-c:v", "ffv1", "-level", "3", "-coder", "1",
        "-context", "1", "-g", "1", "-slices", "4", "-slicecrc", "1",
        output_video_path,
    ]

    try:
        subprocess.run(command, check=True)
        print(f"Video successfully created at {output_video_path}")
    except subprocess.CalledProcessError as e:
        print(f"An error occurred: {e}")



After encoding the frames, I try to stitch them back into an FFV1 video.
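
A likely cause of the mismatch is that encoding FFV1 from PNG input tends to pick an RGB-based pixel format, while the original FFV1 file (converted from an MP4) is typically yuv420p. A hedged variant of the stitch command that converts the PNGs back to the original pixel format and color matrix (assumed here to be yuv420p/bt709; confirm with ffprobe on output.mkv) would look like this:

import os
import subprocess

def stitch_frames_matching_original(frames_dir, output_video_path, framerate=60,
                                    pix_fmt="yuv420p", matrix="bt709"):
    # Re-encode the PNGs as FFV1, but convert RGB back to the same pixel
    # format and color matrix as the untouched part of the video, so that
    # "-c copy" concatenation sees identical stream parameters on both sides.
    command = [
        "ffmpeg", "-y",
        "-framerate", str(framerate),
        "-i", os.path.join(frames_dir, "frame%d.png"),
        "-vf", f"scale=out_color_matrix={matrix},format={pix_fmt}",
        "-color_primaries", matrix,
        "-color_trc", matrix,
        "-colorspace", matrix,
        "-c:v", "ffv1", "-level", "3", "-coder", "1",
        "-context", "1", "-g", "1", "-slices", "4", "-slicecrc", "1",
        output_video_path,
    ]
    subprocess.run(command, check=True)

Keep in mind that a YUV-to-RGB-to-YUV round trip is not bit-exact, which can matter when the payload lives in the least significant bits.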


def concatenate_videos(self, video1_path, video2_path, output_path):
    if not video1_path.endswith(".mkv"):
        video1_path += ".mkv"
    if not video2_path.endswith(".mkv"):
        video2_path += ".mkv"
    if not output_path.endswith(".mkv"):
        output_path += ".mkv"

    video1_path = os.path.join(self.here, video1_path)
    video2_path = os.path.join(self.here, video2_path)
    output_video_path = os.path.join(self.here, output_path)

    # Create a text file with the paths of the videos to concatenate
    concat_list_path = os.path.join(self.here, "concat_list.txt")
    with open(concat_list_path, "w") as f:
        f.write(f"file '{video1_path}'\n")
        f.write(f"file '{video2_path}'\n")

    command = [
        "ffmpeg", "-y",
        "-f", "concat",
        "-safe", "0",
        "-i", concat_list_path,
        "-c", "copy",
        output_video_path,
    ]

    try:
        subprocess.run(command, check=True)
        print(f"Videos successfully concatenated into {output_video_path}")
        os.remove(concat_list_path)  # Clean up the temporary file
    except subprocess.CalledProcessError as e:
        print(f"An error occurred: {e}")



Now I try to concatenate the frames video with the original video, but the result is corrupted because the colors are different.
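
The concat demuxer with "-c copy" only produces a playable file when both inputs share identical codec parameters (codec, resolution, pixel format, and so on). If those cannot be matched exactly, a fallback (a sketch only, and it re-encodes both parts rather than stream-copying) is the concat filter:

import subprocess

def concat_reencode(video1_path, video2_path, output_path):
    # Decode both inputs, join the video streams, and re-encode as FFV1.
    # Unlike "-c copy", this tolerates differing pixel formats (the
    # resolutions still have to match).
    command = [
        "ffmpeg", "-y",
        "-i", video1_path,
        "-i", video2_path,
        "-filter_complex", "[0:v][1:v]concat=n=2:v=1:a=0[v]",
        "-map", "[v]",
        "-c:v", "ffv1", "-level", "3", "-g", "1",
        output_path,
    ]
    subprocess.run(command, check=True)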


This code also does the other processing: it removes all the extracted frames from the video and trims the audio (though I think I will drop the audio trimming, as I realised it is not needed at all).


I think it's because the .png frames lose color information when they are extracted. The only workaround I know of is to extract every single frame, but that makes the program run far too long: for a 12-second video I would extract 700+ frames. Is there a way to fix this?


My full code:


import json
import os
import shutil
import magic
import ffmpeg
import cv2
import numpy as np
import subprocess
from PIL import Image
import glob


class FFV1Steganography:
 def __init__(self):
 self.here = os.path.dirname(os.path.abspath(__file__))

 # Create a folder to save the frames
 self.frames_directory = os.path.join(self.here, "data")
 try:
 if not os.path.exists(self.frames_directory):
 os.makedirs(self.frames_directory)
 except OSError:
 print("Error: Creating directory of data")

 def read_hidden_text(self, filename):
 file_path_txt = os.path.join(self.here, filename)
 # Read the content of the file in binary mode
 with open(file_path_txt, "rb") as f:
 hidden_text_content = f.read()
 return hidden_text_content

 def calculate_length_of_hidden_text(self, filename):
 hidden_text_content = self.read_hidden_text(filename)
 # Convert each byte to its binary representation and join them
 return len("".join(format(byte, "08b") for byte in hidden_text_content))

 def find_raw_video_file(self, filename):
 file_extensions = [".mp4", ".mkv", ".avi"]
 for ext in file_extensions:
 file_path = os.path.join(self.here, filename + ext)
 if os.path.isfile(file_path):
 return file_path
 return None

 def convert_video(self, input_file, ffv1_video):
 # this function is the same as running this command line
 # ffmpeg -i video.mp4 -t 12 -c:v ffv1 -level 3 -coder 1 -context 1 -g 1 -slices 4 -slicecrc 1 -c:a copy output.mkv

 # in order to run any ffmpeg subprocess, you have to have ffmpeg installed into the computer.
 # https://ffmpeg.org/download.html

 # WARNING:
 # the ffmpeg you should download is not the same as the ffmpeg library for python.
 # you need to download the exe from the link above, then add ffmpeg bin directory to system variables
 output_file = os.path.join(self.here, ffv1_video)

 if not output_file.endswith(".mkv"):
 output_file += ".mkv"

 command = [
 "ffmpeg",
 "-y",
 "-i",
 input_file,
 "-t",
 "12",
 "-c:v",
 "ffv1",
 "-level",
 "3",
 "-coder",
 "1",
 "-context",
 "1",
 "-g",
 "1",
 "-slices",
 "4",
 "-slicecrc",
 "1",
 "-c:a",
 "copy",
 output_file,
 ]

 try:
 subprocess.run(command, check=True)
 print(f"Conversion successful: {output_file}")
 return output_file
 except subprocess.CalledProcessError as e:
 print(f"Error during conversion: {e}")

 def extract_audio(self, ffv1_video, audio_path):
 # Ensure the audio output file has the correct extension
 if not audio_path.endswith(".aac"):
 audio_path += ".aac"

 # Full path to the extracted audio file
 extracted_audio = os.path.join(self.here, audio_path)

 if not ffv1_video.endswith(".mkv"):
 ffv1_video += ".mkv"

 command = [
 "ffmpeg",
 "-i",
 ffv1_video,
 "-q:a",
 "0",
 "-map",
 "a",
 extracted_audio,
 ]
 try:
 result = subprocess.run(
 command, check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE
 )
 print(f"Audio successfully extracted to {extracted_audio}")
 print(result.stdout.decode())
 print(result.stderr.decode())
 except subprocess.CalledProcessError as e:
 print(f"An error occurred: {e}")
 print(e.stdout.decode())
 print(e.stderr.decode())

 def read_frame_binary(self, frame_path):
 # Open the image and convert to binary
 with open(frame_path, "rb") as f:
 binary_content = f.read()
 binary_string = "".join(format(byte, "08b") for byte in binary_content)
 return binary_string

 def remove_frames_from_video(self, input_video, frames_to_remove):
 if not input_video.endswith(".mkv"):
 input_video += ".mkv"

 input_video_path = os.path.join(self.here, input_video)

 # Create a filter string to exclude specific frames
 filter_str = (
 "select='not("
 + "+".join([f"eq(n\,{frame})" for frame in frames_to_remove])
 + ")',setpts=N/FRAME_RATE/TB"
 )

 # Temporary output video path
 output_video_path = os.path.join(self.here, "temp_output.mkv")

 command = [
 "ffmpeg",
 "-y",
 "-i",
 input_video_path,
 "-vf",
 filter_str,
 "-c:v",
 "ffv1",
 "-level",
 "3",
 "-coder",
 "1",
 "-context",
 "1",
 "-g",
 "1",
 "-slices",
 "4",
 "-slicecrc",
 "1",
 "-an", # Remove audio
 output_video_path,
 ]

 try:
 subprocess.run(command, check=True)
 print(f"Frames removed. Temporary video created at {output_video_path}")

 # Replace the original video with the new video
 os.replace(output_video_path, input_video_path)
 print(f"Original video replaced with updated video at {input_video_path}")

 # Re-add the trimmed audio to the new video
 self.trim_audio_and_add_to_video(input_video_path, frames_to_remove)
 except subprocess.CalledProcessError as e:
 print(f"An error occurred: {e}")
 if os.path.exists(output_video_path):
 os.remove(output_video_path)

 def trim_audio_and_add_to_video(self, video_path, frames_to_remove):
 # Calculate the new duration based on the remaining frames
 fps = 60 # Assuming the framerate is 60 fps
 total_frames_removed = len(frames_to_remove)
 original_duration = self.get_video_duration(video_path)
 new_duration = original_duration - (total_frames_removed / fps)

 # Extract and trim the audio
 audio_path = os.path.join(self.here, "trimmed_audio.aac")
 command_extract_trim = [
 "ffmpeg",
 "-y",
 "-i",
 video_path,
 "-t",
 str(new_duration),
 "-q:a",
 "0",
 "-map",
 "a",
 audio_path,
 ]
 try:
 subprocess.run(command_extract_trim, check=True)
 print(f"Audio successfully trimmed and extracted to {audio_path}")

 # Add the trimmed audio back to the video
 final_video_path = video_path.replace(".mkv", "_final.mkv")
 command_add_audio = [
 "ffmpeg",
 "-y",
 "-i",
 video_path,
 "-i",
 audio_path,
 "-c:v",
 "copy",
 "-c:a",
 "aac",
 "-strict",
 "experimental",
 final_video_path,
 ]
 subprocess.run(command_add_audio, check=True)
 print(f"Final video with trimmed audio created at {final_video_path}")

 # Replace the original video with the final video
 os.replace(final_video_path, video_path)
 print(f"Original video replaced with final video at {video_path}")
 except subprocess.CalledProcessError as e:
 print(f"An error occurred: {e}")

 def get_video_duration(self, video_path):
 command = [
 "ffprobe",
 "-v",
 "error",
 "-show_entries",
 "format=duration",
 "-of",
 "default=noprint_wrappers=1:nokey=1",
 video_path,
 ]
 try:
 result = subprocess.run(
 command, check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE
 )
 duration = float(result.stdout.decode().strip())
 return duration
 except subprocess.CalledProcessError as e:
 print(f"An error occurred while getting video duration: {e}")
 return 0.0

 def split_into_frames(self, ffv1_video, hidden_text_length):
 if not ffv1_video.endswith(".mkv"):
 ffv1_video += ".mkv"

 ffv1_video_path = os.path.join(self.here, ffv1_video)
 ffv1_video = cv2.VideoCapture(ffv1_video_path)

 currentframe = 0
 total_frame_bits = 0
 frames_to_remove = []

 while True:
 ret, frame = ffv1_video.read()
 if ret:
 name = os.path.join(self.here, "data", f"frame{currentframe}.png")
 print("Creating..." + name)
 cv2.imwrite(name, frame)

 current_frame_path = os.path.join(
 self.here, "data", f"frame{currentframe}.png"
 )

 if os.path.exists(current_frame_path):
 binary_data = self.read_frame_binary(current_frame_path)

 if (total_frame_bits // 8) >= hidden_text_length:
 print("Complete")
 break
 total_frame_bits += len(binary_data)
 frames_to_remove.append(currentframe)
 currentframe += 1
 else:
 print("Complete")
 break

 ffv1_video.release()

 # Remove the extracted frames from the original video
 self.remove_frames_from_video(ffv1_video_path, frames_to_remove)

 def stitch_frames_to_video(self, ffv1_video, framerate=60):
 # this command is another ffmpeg subcommand.
 # it takes every single frame from data1 directory and stitch it back into a ffv1 video
 if not ffv1_video.endswith(".mkv"):
 ffv1_video += ".mkv"

 output_video_path = os.path.join(self.here, ffv1_video)

 command = [
 "ffmpeg",
 "-y",
 "-framerate",
 str(framerate),
 "-i",
 os.path.join(self.frames_directory, "frame%d.png"),
 "-c:v",
 "ffv1",
 "-level",
 "3",
 "-coder",
 "1",
 "-context",
 "1",
 "-g",
 "1",
 "-slices",
 "4",
 "-slicecrc",
 "1",
 output_video_path,
 ]

 try:
 subprocess.run(command, check=True)
 print(f"Video successfully created at {output_video_path}")
 except subprocess.CalledProcessError as e:
 print(f"An error occurred: {e}")

 def add_audio_to_video(self, encoded_video, audio_path, final_video):
 # the audio will be lost during splitting and restitching.
 # that is why previously we separated the audio from video and saved it as aac.
 # now, we can put the audio back into the video, again using ffmpeg subcommand.

 if not encoded_video.endswith(".mkv"):
 encoded_video += ".mkv"

 if not final_video.endswith(".mkv"):
 final_video += ".mkv"

 if not audio_path.endswith(".aac"):
 audio_path += ".aac"

 final_output_path = os.path.join(self.here, final_video)

 command = [
 "ffmpeg",
 "-y",
 "-i",
 os.path.join(self.here, encoded_video),
 "-i",
 os.path.join(self.here, audio_path),
 "-c:v",
 "copy",
 "-c:a",
 "aac",
 "-strict",
 "experimental",
 final_output_path,
 ]
 try:
 subprocess.run(command, check=True)
 print(f"Final video with audio created at {final_output_path}")
 except subprocess.CalledProcessError as e:
 print(f"An error occurred: {e}")

 def concatenate_videos(self, video1_path, video2_path, output_path):
 if not video1_path.endswith(".mkv"):
 video1_path += ".mkv"
 if not video2_path.endswith(".mkv"):
 video2_path += ".mkv"
 if not output_path.endswith(".mkv"):
 output_path += ".mkv"

 video1_path = os.path.join(self.here, video1_path)
 video2_path = os.path.join(self.here, video2_path)
 output_video_path = os.path.join(self.here, output_path)

 # Create a text file with the paths of the videos to concatenate
 concat_list_path = os.path.join(self.here, "concat_list.txt")
 with open(concat_list_path, "w") as f:
 f.write(f"file '{video1_path}'\n")
 f.write(f"file '{video2_path}'\n")

 command = [
 "ffmpeg",
 "-y",
 "-f",
 "concat",
 "-safe",
 "0",
 "-i",
 concat_list_path,
 "-c",
 "copy",
 output_video_path,
 ]

 try:
 subprocess.run(command, check=True)
 print(f"Videos successfully concatenated into {output_video_path}")
 os.remove(concat_list_path) # Clean up the temporary file
 except subprocess.CalledProcessError as e:
 print(f"An error occurred: {e}")

 def cleanup(self, files_to_delete):
 # Delete specified files
 for file in files_to_delete:
 file_path = os.path.join(self.here, file)
 if os.path.exists(file_path):
 os.remove(file_path)
 print(f"Deleted file: {file_path}")
 else:
 print(f"File not found: {file_path}")

 # Delete the frames directory and its contents
 if os.path.exists(self.frames_directory):
 shutil.rmtree(self.frames_directory)
 print(f"Deleted directory and its contents: {self.frames_directory}")
 else:
 print(f"Directory not found: {self.frames_directory}")


if __name__ == "__main__":
 stego = FFV1Steganography()

 # original video (mp4,mkv,avi)
 original_video = "video"
 # converted ffv1 video
 ffv1_video = "output"
 # extracted audio
 extracted_audio = "audio"
 # encoded video without sound
 encoded_video = "encoded"
 # final result video, encoded, with sound
 final_video = "result"

 # region --hidden text processing --
 hidden_text = stego.read_hidden_text("hiddentext.txt")
 hidden_text_length = stego.calculate_length_of_hidden_text("hiddentext.txt")
 # endregion

 # region -- raw video locating --
 raw_video_file = stego.find_raw_video_file(original_video)
 if raw_video_file:
 print(f"Found video file: {raw_video_file}")
 else:
 print("video.mp4 not found.")
 # endregion

 # region -- video processing INPUT--
 # converted_video_file = stego.convert_video(raw_video_file, ffv1_video)
 # if converted_video_file and os.path.exists(converted_video_file):
 # stego.extract_audio(converted_video_file, extracted_audio)
 # else:
 # print(f"Conversion failed: {converted_video_file} not found.")

 # stego.split_into_frames(ffv1_video, hidden_text_length * 50000)
 # endregion

 # region -- video processing RESULT --
 # stego.stitch_frames_to_video(encoded_video)
 stego.concatenate_videos(encoded_video, ffv1_video, final_video)
 # stego.add_audio_to_video(final_video, extracted_audio, final_video)
 # endregion

 # region -- cleanup --
 files_to_delete = [
 extracted_audio + ".aac",
 encoded_video + ".mkv",
 ffv1_video + ".mkv",
 ]

 stego.cleanup(files_to_delete)
 # endregion








Edit for expected results:
I don't know if there is a way to match the exact color encoding between the stitched PNG frames and the rest of the FFV1 video. Is there a way I can extract the frames such that the color, encoding, or anything else I may not know about FFV1 matches the original FFV1 video?
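
One way to approach this (a sketch built on the commands above, not a guaranteed bit-exact match) is to read the original stream's pixel format and color properties with ffprobe and feed exactly those values into the stitching command instead of hard-coding them:

import json
import subprocess

def video_stream_params(path):
    # Read the fields of the original FFV1 stream that the re-encoded
    # segment needs to reproduce.
    cmd = [
        "ffprobe", "-v", "error", "-select_streams", "v:0",
        "-show_entries", "stream=pix_fmt,color_space,color_primaries,color_transfer",
        "-of", "json", path,
    ]
    out = subprocess.run(cmd, check=True, stdout=subprocess.PIPE).stdout
    return json.loads(out)["streams"][0]

params = video_stream_params("output.mkv")  # the untouched FFV1 video
print(params)
# Pass params["pix_fmt"] (and the color values, when present) to the stitch
# command sketched earlier instead of the hard-coded "yuv420p"/"bt709".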


-
FFMPEG "Segmentation fault" with network stream source
23 December 2023, by user11186466 — I use release 4.2.2 (static) from "https://johnvansickle.com/ffmpeg/"



The final code will run on Amazon AWS Lambda.



Goal: use a URL stream and add a watermark.



Link to video : https://feoval.fr/519.mp4



Link to watermark: https://feoval.fr/watermark.png



./ffmpeg -i "https://feoval.fr/519.mp4" -i "./watermark.png" -filter_complex "overlay=W-w-10:H-h-10:format=rgb" -f "mp4" -movflags "frag_keyframe+empty_moov" -pix_fmt "yuv420p" test.mp4




return "Segmentation fault"



I get the same error on my computer and on the AWS Lambda server.



./ffmpeg -i "https://feoval.fr/519.mp4" -f "mp4" -movflags "frag_keyframe+empty_moov" -pix_fmt "yuv420p" test.mp4




works (but no watermark)



./ffmpeg -i "./519.mp4" -i "./watermark.png" -filter_complex "overlay=W-w-10:H-h-10:format=rgb" -f "mp4" -movflags "frag_keyframe+empty_moov" -pix_fmt "yuv420p" test.mp4




works (but not with the stream)



Thank you very much!



Logs for the first case, which returns "Segmentation fault":



...
Stream mapping:
Stream #0:0 (h264) -> overlay:main (graph 0)
Stream #1:0 (png) -> overlay:overlay (graph 0)
overlay (graph 0) -> Stream #0:0 (libx264)
Stream #0:1 -> #0:1 (aac (native) -> aac (native))
Press [q] to stop, ? for help
[libx264 @ 0x742e480] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
[libx264 @ 0x742e480] profile High, level 3.1, 4:2:0, 8-bit
[libx264 @ 0x742e480] 264 - core 159 r2991 1771b55 - H.264/MPEG-4 AVC codec - Copyleft 2003-2019 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'test.mp4':
Metadata:
major_brand : mp42
minor_version : 1
compatible_brands: isommp41mp42
encoder : Lavf58.29.100
Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 480x848, q=-1--1, 30 fps, 15360 tbn, 30 tbc (default)
Metadata:
encoder : Lavc58.54.100 libx264
Side data:
cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
Metadata:
creation_time : 2020-01-13T08:54:26.000000Z
handler_name : Core Media Audio
encoder : Lavc58.54.100 aac
Segmentation fault (core dumped)
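
Since the same overlay filter works once 519.mp4 is a local file, one workaround to try (a sketch only, not a confirmed fix for the crash itself) is to download the input to local storage first and then watermark the local copy, for example from Python on Lambda:

import subprocess
import urllib.request

# Hypothetical paths; on AWS Lambda only /tmp is writable.
src = "/tmp/519.mp4"
urllib.request.urlretrieve("https://feoval.fr/519.mp4", src)

subprocess.run([
    "./ffmpeg", "-i", src, "-i", "./watermark.png",
    "-filter_complex", "overlay=W-w-10:H-h-10:format=rgb",
    "-f", "mp4", "-movflags", "frag_keyframe+empty_moov",
    "-pix_fmt", "yuv420p", "/tmp/test.mp4",
], check=True)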



-
Preserving or syncing audio of original video to video fragments
2 May 2015, by Code_Ed_Student — I currently have a few videos that I want to split into EXACTLY 30-second segments. I have been able to accomplish this, but the audio is not being properly preserved: it is out of sync. I tried playing with aresample, ab, and other options, but I am not getting the desired output. What would be the best way to both split the videos into exactly 30-second segments and preserve the audio?
ffmpeg -i $file -preset medium -map 0 -segment_time 30 -g 225 -r 25 -sc_threshold 0 -force_key_frames expr:gte(t,n_forced*30) -f segment -movflags faststart -vf scale=-1:720,format=yuv420p -vcodec libx264 -crf 20 -codec:a copy $dir/$video_file-%03d.mp4
A short snippet of the output:
Input #0, flv, from '/media/sf_linux_sandbox/hashtag_pull/video-downloads/5b64d7ab-a669-4016-b55e-fe4720cbd843/5b64d7ab-a669-4016-b55e-fe4720cbd843.flv':
Metadata:
moovPosition : 40
avcprofile : 77
avclevel : 31
aacaot : 2
videoframerate : 30
audiochannels : 2
©too : Lavf56.15.102
length : 7334912
sampletype : mp4a
timescale : 48000
Duration: 00:02:32.84, start: 0.000000, bitrate: 2690 kb/s
Stream #0:0: Video: h264 (Main), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 30.30 fps, 29.97 tbr, 1k tbn, 59.94 tbc
Stream #0:1: Audio: aac (LC), 48000 Hz, stereo, fltp
[libx264 @ 0x3663ba0] using SAR=1/1
[libx264 @ 0x3663ba0] using cpu capabilities: MMX2 SSE2Fast SSSE3 Cache64
[libx264 @ 0x3663ba0] profile High, level 3.1
[libx264 @ 0x3663ba0] 264 - core 144 r2 40bb568 - H.264/MPEG-4 AVC codec - Copyleft 2003-2014 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=3 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=225 keyint_min=22 scenecut=0 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=20.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, segment, to '/media/sf_linux_sandbox/hashtag_pull/video-edits/30/5b64d7ab-a669-4016-b55e-fe4720cbd843/5b64d7ab-a669-4016-b55e-fe4720cbd843-%03d.mp4':
Metadata:
moovPosition : 40
avcprofile : 77
avclevel : 31
aacaot : 2
videoframerate : 30
audiochannels : 2
©too : Lavf56.15.102
length : 7334912
sampletype : mp4a
timescale : 48000
encoder : Lavf56.16.102
Stream #0:0: Video: h264 (libx264), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], q=-1--1, 25 fps, 12800 tbn, 25 tbc
Metadata:
encoder : Lavc56.19.100 libx264
Stream #0:1: Audio: aac, 48000 Hz, stereo
Stream mapping:
Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
Stream #0:1 -> #0:1 (copy)
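
With -codec:a copy, the segment muxer can only cut the audio at existing packet boundaries, which is a common reason for the audio drifting against the re-encoded video. One variant to try (a hedged sketch with placeholder paths, not a verified fix) re-encodes the audio as well so each 30-second cut lands exactly where the video does:

import subprocess

input_file = "input.flv"               # placeholder for $file
output_pattern = "out/clip-%03d.mp4"   # placeholder for $dir/$video_file-%03d.mp4

subprocess.run([
    "ffmpeg", "-i", input_file,
    "-map", "0",
    "-vf", "scale=-1:720,format=yuv420p",
    "-vcodec", "libx264", "-crf", "20", "-preset", "medium",
    "-g", "225", "-r", "25", "-sc_threshold", "0",
    "-force_key_frames", "expr:gte(t,n_forced*30)",
    "-c:a", "aac", "-b:a", "128k",     # re-encode audio instead of copying it
    "-f", "segment", "-segment_time", "30",
    "-movflags", "faststart",
    output_pattern,
], check=True)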