
Other articles (57)

  • The SPIPmotion queue

    28 November 2010

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database called spip_spipmotion_attentes.
    This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document must be attached automatically; objet, the type of object to which (...)

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation by users as well as developers - including: critiques of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; and translations of existing documentation into other languages.
    To contribute, register to the project users’ mailing (...)

  • Selection of projects using MediaSPIP

    2 May 2011

    The examples below are representative of specific uses of MediaSPIP for specific projects.
    MediaSPIP farm @ Infini
    The non-profit organization Infini develops hospitality activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area and at the national level, among the half-dozen such associations. Its members (...)

On other sites (6041)

  • ffmpeg concatenating videos of different fps while keeping the total length unchanged

    23 November 2017, by A_Matar

    I want to pad an mp4 video stream with another video clip of a static image that I created using:

    def generate_white_vid(duration):
        # e.g. white_vid_0.42.mp4 (note the "{0:.2f}" format string)
        output_filename = os.path.join(p_path, 'white_vid_' + "{0:.2f}".format(duration) + '.mp4')
        ffmpeg_create_vid_from_static_img = 'ffmpeg -loop 1 -i /path/WhiteBackground.jpg -c:v libx264 -t %f -pix_fmt yuv420p -vf scale=1920:1080 %s' % (duration, output_filename)
        p = subprocess.Popen(ffmpeg_create_vid_from_static_img, shell=True)
        p.communicate()
        return output_filename

    I use the following to concatenate:

    def concat_vids(clip_paths):
       filenames_txt = open('clips_to_join.txt','w')
       for clip in clip_paths:
           filenames_txt.write ('file \''+ clip+'\'\n')
       filenames_txt.close()
       output_filename = clip_paths[0].split('.', 2)[0]
       output_file_path = os.path.join(root_path, output_filename+'-padded.mp4')
       # join the clips
       ffmpeg_command = ["ffmpeg", "-f", "concat", "-safe", "0", "-i", "clips_to_join.txt", "-codec", "copy", output_file_path] # output_filename = ch0X-start_time-end_time
       p = subprocess.Popen(ffmpeg_command)
       p.communicate() # wait till the subprocess finishes. You can send commands to process as well.
       return output_file_path
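
    For reference, a minimal sketch of how these two helpers might be chained (purely illustrative, not from the original post; it assumes p_path and root_path are defined elsewhere in the script and the file names are made up):

    pad_clip = generate_white_vid(0.42)                    # 0.42 s white clip
    padded_path = concat_vids(['original.mp4', pad_clip])  # append it to the main clip
    print(padded_path)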

    When I check the length of the resulting video after concatenation, I find that it is not equal to the sum of the two segments I concatenated; sometimes it is even shorter by a few seconds!

    Here is how I get the video length in seconds:

    def ffmpeg_len(vid_path):
        '''
        Returns length in seconds using ffmpeg
        '''
        ffmpeg_get_mediafile_length = ['sh', '-c', 'ffmpeg -i "$1" 2>&1 | grep Duration', '_', vid_path]
        p = subprocess.Popen(ffmpeg_get_mediafile_length, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        output, err = p.communicate()
        output = output.decode('utf-8', 'replace')  # communicate() returns bytes on Python 3
        length_regexp = r'Duration: (\d{2}):(\d{2}):(\d{2})(\.\d+),'  # raw string so the \d escapes survive
        re_length = re.compile(length_regexp)
        matches = re_length.search(output)
        if matches:
            video_length = int(matches.group(1)) * 3600 + \
                           int(matches.group(2)) * 60 + \
                           int(matches.group(3)) + float(matches.group(4))
            return video_length
        else:
            print("Can't determine video length.")
            print(err)
            raise SystemExit
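
    As a side note (not part of the original code), ffprobe can report the container duration directly, which avoids scraping ffmpeg's banner output; a minimal sketch:

    def ffprobe_len(vid_path):
        # Sketch: ffprobe prints the format duration in seconds as a bare number
        cmd = ['ffprobe', '-v', 'error', '-show_entries', 'format=duration',
               '-of', 'default=noprint_wrappers=1:nokey=1', vid_path]
        return float(subprocess.check_output(cmd).decode().strip())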

    My guess is that the concatenation unifies the fps across all the clips to be joined. If that is the case, how do I prevent it from happening? How can I get a video of exactly the desired length?

    It may be worth mentioning that the padding video is very short (0.42 seconds), the original video is 210.58 seconds, and the resulting video is 210.56 seconds!

    I have verified that ffmpeg does generate the desired padding clip and that it has the desired length of 0.42 seconds (I got a 0.43-second segment when I forced 30 fps, but that is acceptable).
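
    If that guess is right, one way to test it (a sketch, not something from the original post) is to bring every input to the same frame rate and the same MP4 timescale before concatenating, so the concat demuxer never has to reconcile different timebases; with identical timebases the summed duration should survive to within roughly one frame:

    def normalize_clip(clip_path, fps=25):
        # Sketch: re-encode to a common fps and a fixed MP4 timescale so all
        # inputs share the same timebase before the concat demuxer joins them.
        out_path = clip_path.rsplit('.', 1)[0] + '-norm.mp4'
        cmd = ['ffmpeg', '-y', '-i', clip_path, '-r', str(fps),
               '-video_track_timescale', '90000',
               '-c:v', 'libx264', '-pix_fmt', 'yuv420p', out_path]
        subprocess.check_call(cmd)
        return out_path

    # e.g. concat_vids([normalize_clip(c) for c in ['video.mp4', 'white_pad.mp4']])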

  • I'm trying to hide information in an H264 video. When I stitch the video up, split it into frames again, and try to read it, the information is lost

    18 May 2024, by Wer Wer

    I'm trying to create a video steganography Python script. The algorithm for hiding will be...

    1. convert any video codec into h264 lossless
    2. save the audio of the video and split the h264 video into frames
    3. hide my txt secret into frame0 using the LSB replacement method (see the short sketch after this list)
    4. stitch the video back up and put the audio back in

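    For step 3, the LSB replacement itself comes down to overwriting the lowest bit of a colour channel with one bit of the secret; a generic one-channel illustration (not taken from the script below):

    channel = 0b10110100                           # original 8-bit channel value (180)
    secret_bit = 1                                 # one bit of the hidden message
    stego = (channel & 0b11111110) | secret_bit    # -> 0b10110101 (181)
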

    ...and when I want to recover the text, I'll

    1. save the audio of the video and split the encoded h264 video into frames
    2. retrieve my hidden text from frame0 and print the text


    So, this is what I can do:

    1. split the video
    2. hide the text in frame0
    3. retrieve the text from frame0
    4. stitch the video


    But after stitching the video, when I tried to retrieve the text by splitting the encoded video again, it appears that the text has been lost, because I got this error:

    UnicodeEncodeError: 'charmap' codec can't encode character '\x82' in position 21: character maps to <undefined>
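
    As an aside, the UnicodeEncodeError itself is only a symptom: the recovered characters are garbage, and print() cannot render a character like '\x82' with the console's charmap encoding. Printing the repr, as in the line below (not in the original script), sidesteps the crash but does not bring the hidden text back.

    print(f"[+] Decoded message: {decoded_message!r}")  # repr-escapes unprintable characters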

    I'm not sure whether my LSB replacement was destroyed, which would explain why I can't retrieve my frame 0 information, or whether the H264 conversion command I used converted my video into a lossy H264 version instead of lossless (which I don't believe, because I specified -qp 0). This is the command I used to convert my video:

    ffmpeg -i video.mp4 -t 12 -c:v libx264 -preset veryslow -qp 0 output.mp4

    This is my code:

    import json
    import os
    import magic
    import ffmpeg
    import cv2
    import numpy as np

    import subprocess

    # Path to the file you want to check
    here = os.path.dirname(os.path.abspath(__file__))
    file_path = os.path.join(here, "output.mp4")
    raw_video = cv2.VideoCapture(file_path)
    audio_output_path = os.path.join(here, "audio.aac")
    final_video_file = os.path.join(here, "output.mp4")

    # create a folder to save the frames.
    frames_directory = os.path.join(here, "data1")
    try:
        if not os.path.exists(frames_directory):
            os.makedirs(frames_directory)
    except OSError:
        print("Error: Creating directory of data")

    file_path_txt = os.path.join(here, "hiddentext.txt")
    # Read the content of the file
    with open(file_path_txt, "r") as f:
        file_content = f.read()
    # txt_binary_representation = "".join(format(byte, "08b") for byte in file_content)
    # print(file_content)

    """
    use this cmd to convert any video to h264 lossless. original vid in 10 bit depth format
    ffmpeg -i video.mp4 -c:v libx264 -preset veryslow -qp 0 output.mp4

    use this cmd to convert any video to h264 lossless. original vid in 8 bit depth format
    ffmpeg -i video.mp4 -c:v libx264 -preset veryslow -crf 0 output.mp4

    i used this command to only get first 12 sec of video because the h264 vid is too large
    ffmpeg -i video.mp4 -t 12 -c:v libx264 -preset veryslow -qp 0 output.mp4

    check for multiple values to ensure its h264 lossless:
    1. CRF = 0
    2. qp = 0
    3. High 4:4:4 Predictive
    """


    # region --codec checking. ensure video is h264 lossless--
    def check_h264_lossless(file_path):
        try:
            # Use ffprobe to get detailed codec information, including tags
            result = subprocess.run(
                [
                    "ffprobe",
                    "-v",
                    "error",
                    "-show_entries",
                    "stream=codec_name,codec_long_name,profile,level,bit_rate,avg_frame_rate,nb_frames,tags",
                    "-of",
                    "json",
                    file_path,
                ],
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
                text=True,
            )
            # Parse the ffprobe JSON output (calling check_h264_lossless here again would recurse forever)
            metadata = json.loads(result.stdout)
            print(json.dumps(metadata, indent=4))

            # Check if the CRF value is available in the tags
            for stream in metadata.get("streams", []):
                if stream.get("codec_name") == "h264":
                    tags = stream.get("tags", {})
                    crf_value = tags.get("crf")
                    encoder = tags.get("encoder")
                    print(f"CRF value: {crf_value}")
                    print(f"Encoder: {encoder}")
            return metadata
        except Exception as e:
            return f"An error occurred: {e}"


    # endregion


    # region --splitting video into frames--
    def extract_audio(input_video_path, audio_output_path):
        if os.path.exists(audio_output_path):
            print(f"Audio file {audio_output_path} already exists. Skipping extraction.")
            return
        command = [
            "ffmpeg",
            "-i",
            input_video_path,
            "-q:a",
            "0",
            "-map",
            "a",
            audio_output_path,
        ]
        try:
            subprocess.run(command, check=True)
            print(f"Audio successfully extracted to {audio_output_path}")
        except subprocess.CalledProcessError as e:
            print(f"An error occurred: {e}")


    def split_into_frames():
        extract_audio(file_path, audio_output_path)
        currentframe = 0
        print("Splitting...")
        while True:
            ret, frame = raw_video.read()
            if ret:
                name = os.path.join(here, "data1", f"frame{currentframe}.png")
                # print("Creating..." + name)
                cv2.imwrite(name, frame)
                currentframe += 1
            else:
                print("Complete")
                break


    # endregion


    # region --merge all back into h264 lossless--
    # output_video_file = "output1111.mp4"


    def stitch_frames_to_video(frames_dir, output_video_path, framerate=60):
        command = [
            "ffmpeg",
            "-y",
            "-framerate",
            str(framerate),
            "-i",
            os.path.join(frames_dir, "frame%d.png"),
            "-c:v",
            "libx264",
            "-preset",
            "veryslow",
            "-qp",
            "0",
            output_video_path,
        ]

        try:
            subprocess.run(command, check=True)
            print(f"Video successfully created at {output_video_path}")
        except subprocess.CalledProcessError as e:
            print(f"An error occurred: {e}")


    def add_audio_to_video(video_path, audio_path, final_output_path):
        command = [
            "ffmpeg",
            "-i",
            video_path,
            "-i",
            audio_path,
            "-c:v",
            "copy",
            "-c:a",
            "aac",
            "-strict",
            "experimental",
            final_output_path,
        ]
        try:
            subprocess.run(command, check=True)
            print(f"Final video with audio created at {final_output_path}")
        except subprocess.CalledProcessError as e:
            print(f"An error occurred: {e}")


    # endregion


    def to_bin(data):
        if isinstance(data, str):
            return "".join([format(ord(i), "08b") for i in data])
        elif isinstance(data, bytes) or isinstance(data, np.ndarray):
            return [format(i, "08b") for i in data]
        elif isinstance(data, int) or isinstance(data, np.uint8):
            return format(data, "08b")
        else:
            raise TypeError("Type not supported")


    def encode(image_name, secret_data):
        image = cv2.imread(image_name)
        n_bytes = image.shape[0] * image.shape[1] * 3 // 8
        print("[*] Maximum bytes to encode:", n_bytes)
        secret_data += "====="
        if len(secret_data) > n_bytes:
            raise ValueError("[!] Insufficient bytes, need bigger image or less data")
        print("[*] Encoding Data")

        data_index = 0
        binary_secret_data = to_bin(secret_data)
        data_len = len(binary_secret_data)
        for row in image:
            for pixel in row:
                r, g, b = to_bin(pixel)
                if data_index < data_len:
                    pixel[0] = int(r[:-1] + binary_secret_data[data_index], 2)
                    data_index += 1
                if data_index < data_len:
                    pixel[1] = int(g[:-1] + binary_secret_data[data_index], 2)
                    data_index += 1
                if data_index < data_len:
                    pixel[2] = int(b[:-1] + binary_secret_data[data_index], 2)
                    data_index += 1
                if data_index >= data_len:
                    break
        return image


    def decode(image_name):
        print("[+] Decoding")
        image = cv2.imread(image_name)
        binary_data = ""
        for row in image:
            for pixel in row:
                r, g, b = to_bin(pixel)
                binary_data += r[-1]
                binary_data += g[-1]
                binary_data += b[-1]
        all_bytes = [binary_data[i : i + 8] for i in range(0, len(binary_data), 8)]
        decoded_data = ""
        for byte in all_bytes:
            decoded_data += chr(int(byte, 2))
            if decoded_data[-5:] == "=====":
                break
        return decoded_data[:-5]


    frame0_path = os.path.join(here, "data1", "frame0.png")
    encoded_image_path = os.path.join(here, "data1", "frame0.png")


    def encoding_function():
        split_into_frames()

        encoded_image = encode(frame0_path, file_content)
        cv2.imwrite(encoded_image_path, encoded_image)

        stitch_frames_to_video(frames_directory, file_path)
        add_audio_to_video(file_path, audio_output_path, final_video_file)


    def decoding_function():
        split_into_frames()
        decoded_message = decode(encoded_image_path)
        print(f"[+] Decoded message: {decoded_message}")


    # encoding_function()
    decoding_function()

    So I tried to put my decoding function into my encoding function like this:

    def encoding_function():
        split_into_frames()

        encoded_image = encode(frame0_path, file_content)
        cv2.imwrite(encoded_image_path, encoded_image)

        # immediately get frame0 and decode without stitching to check if the data is there
        decoded_message = decode(encoded_image_path)
        print(f"[+] Decoded message: {decoded_message}")

        stitch_frames_to_video(frames_directory, file_path)
        add_audio_to_video(file_path, audio_output_path, final_video_file)

    This returns my secret text from frame0. But splitting the video again after stitching does not return my hidden text; the hidden text was lost.

    def decoding_function():
        split_into_frames()
        # this function runs after encoding_function(); the secret text is lost,
        # resulting in the "charmap codec can't encode" error
        decoded_message = decode(encoded_image_path)
        print(f"[+] Decoded message: {decoded_message}")

    EDIT:
    So I ran the encoding function first, copied frame0.png out and placed it somewhere. Then I ran the decoding function and got another frame0.png.

    I ran both frame0.png files through this Python function:

    frame0_data1_path = os.path.join(here, "data1", "frame0.png")
    frame0_data2_path = os.path.join(here, "data2", "frame0.png")
    frame0_data1 = cv2.imread(frame0_data1_path)
    frame0_data2 = cv2.imread(frame0_data2_path)

    if frame0_data1 is None:
        print(f"Error: Could not load image from {frame0_data1_path}")
    elif frame0_data2 is None:
        print(f"Error: Could not load image from {frame0_data2_path}")
    else:
        if np.array_equal(frame0_data1, frame0_data2):
            print("The frames are identical.")
        else:
            print("The frames are different.")

    ...and apparently both are different. This means my frame0 data got changed when I stitched the frames back into the video after encoding. Is there a way to make it not change? Or will H264 (or any video codec) always change the frames a little when you stitch them back up?
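
    One thing worth checking: -qp 0 only makes libx264 lossless in the pixel format it actually encodes, and stitching BGR PNG frames into yuv420p involves chroma subsampling plus RGB-to-YUV rounding, which is enough to flip LSBs even though the H264 step itself is lossless. A sketch of a stitch variant that avoids any colourspace conversion, to test that hypothesis (an illustration only, not the approach used above; it assumes 8-bit PNG frames and an ffmpeg build with libx264rgb):

    def stitch_frames_to_video_rgb(frames_dir, output_video_path, framerate=60):
        # Hypothetical test variant: libx264rgb with -qp 0 encodes the RGB samples
        # directly, so no RGB->YUV conversion or 4:2:0 subsampling touches the LSBs.
        command = [
            "ffmpeg", "-y",
            "-framerate", str(framerate),
            "-i", os.path.join(frames_dir, "frame%d.png"),
            "-c:v", "libx264rgb", "-preset", "veryslow", "-qp", "0",
            output_video_path,
        ]
        subprocess.run(command, check=True)

    If frame0 survives that round trip byte-for-byte, the loss above most likely comes from the pixel-format conversion rather than from the H264 encoding itself.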

  • Play a video with ffmpeg and SDL2 on a Raspberry Pi 5

    18 February 2024, by aforino

    I want to create a Python script that decodes an h264 1080p video and outputs it via SDL2 on a Raspberry Pi 5. The Raspberry Pi 5 is able to play an h264 1080p video without problems using VLC; total CPU load with VLC is about 10%. However, decoding with ffmpeg and outputting via SDL2 uses around 70% CPU. Since I want to be able to switch seamlessly between two output videos, I will need to decode two videos at the same time, so 70% CPU load for a single 1080p video is not acceptable. How can I make the code more efficient, and why is VLC so much more efficient?

    This is my current Python script:

    import numpy as np
    import ffmpeg  # ffmpeg-python
    import sdl2.ext

    in_file = ffmpeg.input('bbb1080_x264.mp4', re=None)

    width = 1920
    height = 1080

    process1 = (
        in_file
        .output('pipe:', format='rawvideo', pix_fmt='bgra')
        .run_async(pipe_stdout=True)
    )

    sdl2.ext.init()
    window = sdl2.ext.Window("Hello World!", size=(width, height))
    window.show()
    windowsurface = sdl2.SDL_GetWindowSurface(window.window)
    windowArray = sdl2.ext.pixels3d(windowsurface.contents)

    sdl2.ext.mouse.hide_cursor()

    while True:
        in_bytes = process1.stdout.read(width * height * 4)

        if not in_bytes:
            break

        in_frame = (
            np
            .frombuffer(in_bytes, np.uint8)
            .reshape([height, width, 4])
            .transpose(1, 0, 2)
        )

        for event in sdl2.ext.get_events():
            if event.type == sdl2.SDL_QUIT:
                exit()

        windowArray[:] = in_frame
        window.refresh()

    process1.wait()
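
    For comparison, one direction that may reduce the CPU load (a sketch only, not a drop-in replacement; the IYUV texture path and the names below are assumptions, and decoding still happens in software): ask ffmpeg for raw yuv420p instead of bgra, which halves the bytes copied per frame and skips the software conversion to BGRA, and hand each frame to a streaming SDL texture so the GPU does the YUV-to-RGB conversion and scaling instead of the numpy transpose and window-surface copy.

    import ffmpeg  # ffmpeg-python
    import sdl2
    import sdl2.ext

    width, height = 1920, 1080
    frame_size = width * height * 3 // 2  # yuv420p is 1.5 bytes per pixel

    process = (
        ffmpeg
        .input('bbb1080_x264.mp4', re=None)
        .output('pipe:', format='rawvideo', pix_fmt='yuv420p')
        .run_async(pipe_stdout=True)
    )

    sdl2.ext.init()
    window = sdl2.ext.Window("Video", size=(width, height))
    window.show()
    renderer = sdl2.SDL_CreateRenderer(window.window, -1, sdl2.SDL_RENDERER_ACCELERATED)
    # Streaming YUV texture: SDL_RenderCopy lets the GPU convert and scale the frame.
    texture = sdl2.SDL_CreateTexture(renderer, sdl2.SDL_PIXELFORMAT_IYUV,
                                     sdl2.SDL_TEXTUREACCESS_STREAMING, width, height)

    running = True
    while running:
        in_bytes = process.stdout.read(frame_size)
        if not in_bytes:
            break
        for event in sdl2.ext.get_events():
            if event.type == sdl2.SDL_QUIT:
                running = False
        # Contiguous Y, U, V planes; the pitch is the Y-plane width.
        sdl2.SDL_UpdateTexture(texture, None, in_bytes, width)
        sdl2.SDL_RenderClear(renderer)
        sdl2.SDL_RenderCopy(renderer, texture, None, None)
        sdl2.SDL_RenderPresent(renderer)

    process.wait()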


    Also, it is interesting to note that when I start VLC on the Raspberry Pi 5, this is the output on the terminal:

    [00007fff78c1a550] avcodec decoder error: cannot start codec (h264_v4l2m2m)
    Fontconfig warning: ignoring UTF-8: not a valid region tag
    [00007fff68002d70] gles2 generic error: parent window not available
    [00007fff68002d70] xcb generic error: window not available
    [00007fff680013f0] mmal_xsplitter vout display: Try drm
    [00007fff68002d70] drm_vout generic: <<< OpenDrmVout: Fmt=I420
    [00007fff68002d70] drm_vout generic error: Failed to get xlease

    It indicates that VLC is not using the h264_v4l2m2m hardware acceleration.
