
Media (91)

Other articles (41)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer flash fallback is used.
    The HTML5 player used was created specifically for MediaSPIP: its appearance can be fully customized to match a chosen theme.
    These technologies make it possible to deliver video and sound both to conventional computers (...)

On other sites (10086)

  • Why is my .mp4 file created using cv2.VideoWriter not syncing up with the audio when I combine the video and audio using ffmpeg [closed]

    27 December 2024, by joeS125

    The aim of the script is to take text from a text file and put it onto a stock video with an AI voice reading the text, similar to those Reddit stories on social media with Minecraft parkour in the background.

    


    import cv2
import time
from ffpyplayer.player import MediaPlayer
from Transcription import newTranscribeAudio
from pydub import AudioSegment

#get a gpt text generation to create a story based on a prompt, for example sci-fi story and spread it over 3-4 parts
#get stock footage, like minecraft parkour etc
#write text of script on the footage
#create video for each part
#have ai voiceover to read the transcript
cap = cv2.VideoCapture(r"Stock_Videos\Minecraft_Parkour.mp4")
transcription = newTranscribeAudio("final_us.wav")
player = MediaPlayer("final_us.mp3")
audio = AudioSegment.from_file("final_us.mp3")
story = open("Story.txt", "r").read()
story_split = story.split("||")
fps = cap.get(cv2.CAP_PROP_FPS)
frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
video_duration = frame_count / fps  # Duration of one loop of the video
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
audio_duration = len(audio) / 1000  # Duration in seconds
video_writer = cv2.VideoWriter(f"CompletedVideo.mp4", fourcc, fps, (1080, 1920))

choice = 0  # which part of the story to use
part_split = story_split[choice].split(" ")
with open("Segment.txt", "w") as file:
    file.write(story_split[choice])
start_time = time.time()
length = len(part_split) - 1
next_text = []
for j in range(0, length):
    temp = part_split[j].replace("\n", "")
    next_text.append([temp])
index = 0
word_index = 0
frame_size_x = 1080
frame_size_y = 1920
audio_duration = len(audio) / 1000  # Duration in seconds
start_time = time.time()
wait_time = 1 / fps
while (time.time() - start_time) < audio_duration:
    cap.set(cv2.CAP_PROP_POS_FRAMES, 0)  # Restart video
    elapsed_time = time.time() - start_time
    print(video_writer)
    if index >= len(transcription):
        break
    while cap.isOpened():
        # Capture frames in the video 
        ret, frame = cap.read()
        if not ret:
            break
        audio_frame, val = player.get_frame()
        if val == 'eof':  # End of file
            print("Audio playback finished.")
            break
        if index >= len(transcription):
            break
        
        if frame_size_x == -1:
            frame_size_x = frame.shape[1]
            frame_size_y = frame.shape[0]

        elapsed_time = time.time() - start_time

        # describe the type of font 
        # to be used. 
        font = cv2.FONT_HERSHEY_SIMPLEX 
        trans = transcription[index]["words"]
        end_time = trans[word_index]["end"]
        if trans[word_index]["start"] < elapsed_time < trans[word_index]["end"]:
            video_text = trans[word_index]["text"]
        elif elapsed_time >= trans[word_index]["end"]:
            #index += 1
            word_index += 1
        if (word_index >= len(trans)):
            index += 1
            word_index = 0
        # get boundary of this text
        textsize = cv2.getTextSize(video_text, font, 3, 6)[0]
        # get coords based on boundary
        textX = int((frame.shape[1] - textsize[0]) / 2)
        textY = int((frame.shape[0] + textsize[1]) / 2)
        
        cv2.putText(frame,  
                    video_text,  
                    (textX, textY),  
                    font, 3,  
                    (0, 255, 255),  
                    6,  
                    cv2.LINE_4)
        
        # Define the resize scale
        scale_percent = 50  # Resize to 50% of the original size
        # Get new dimensions
        width = 1080
        height = 1920
        new_size = (width, height)

        # Resize the frame
        resized_frame = cv2.resize(frame, new_size)
        video_writer.write(resized_frame)
        cv2.imshow('video', resized_frame)
        cv2.waitKey(wait_time)
        if cv2.waitKey(1) & 0xFF == ord('q'): 
            break
cv2.destroyAllWindows()
video_writer.release()
cap.release()



    


    When I run this script, the audio matches the text in the video perfectly and it runs for the correct amount of time to match the audio (2 min 44 sec). However, the saved video CompletedVideo.mp4 only lasts 1 min 10 sec, and I am unsure why it has sped up. The video is 60 fps. If you need any more information please let me know; thanks in advance.

    


    I have tried changing the fps and changing the wait_time after writing each frame. I expect CompletedVideo.mp4 to be 2 min 44 sec long, not 1 min 10 sec.
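
    One likely explanation, offered as an assumption rather than something the post confirms: cv2.VideoWriter plays the file back at the fps it was given, so the saved duration is simply (frames written) / fps. Pacing the loop with time.time() and waitKey writes only as many frames as the loop manages in those 164 seconds, which can be well under 60 per second, so the file comes out short and looks sped up. A minimal sketch of a frame-count-driven writer follows; the file names and the audio length are placeholders, not the original script.

# Sketch: write exactly fps * audio_duration frames so the output length
# matches the audio, independent of how fast the loop runs in real time.
import cv2

cap = cv2.VideoCapture("stock.mp4")            # placeholder input clip
if not cap.isOpened():
    raise SystemExit("could not open the stock clip")
fps = cap.get(cv2.CAP_PROP_FPS)
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
writer = cv2.VideoWriter("CompletedVideo.mp4", fourcc, fps, (1080, 1920))

audio_duration = 164.0                         # assumed audio length in seconds
frames_needed = int(audio_duration * fps)      # e.g. 60 fps * 164 s = 9840 frames

written = 0
while written < frames_needed:
    ret, frame = cap.read()
    if not ret:                                # loop the stock clip when it ends
        cap.set(cv2.CAP_PROP_POS_FRAMES, 0)
        continue
    frame = cv2.resize(frame, (1080, 1920))    # match the writer's frame size
    # text overlay for the current frame would go here (cv2.putText)
    writer.write(frame)
    written += 1                               # count frames, not wall-clock time

writer.release()
cap.release()

    With the frame count tied to the audio length, muxing final_us.mp3 onto the result with ffmpeg should give matching durations, since the video now lasts frames_needed / fps seconds.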

    


  • Unable to load FFProbe in Laravel app

    1 August 2016, by Bullgod

    I have a problem when I upload a video with a form on my server. At the moment of upload, the application must convert the video format to mp4. On my notebook this conversion works fine, but when I try to convert a video on the server, I receive this error:

    ExecutableNotFoundException in FFProbeDriver.php line 50 : Unable to load FFProbe

    This is my form:

    {!! Form::open(['route' =>'upload.store', 'method'=>'POST', 'files'=> true ]) !!}

    <div class="form-group" style="display: none;">
       {!! Form::label('usuario_id', 'usuario_id:') !!}
       {!! Form::text('usuario_id', Auth::user()->id) !!}
    </div>
    <div class="form-group" style="display: none;">
       {!! Form::label('state', 'State:') !!}
       {!! Form::text('state', 0) !!}
    </div>
    <div class="form-group">
       {!! Form::label('asignatura_id', 'Asignatura:') !!}
       {!! Form::select('asignatura_id', $subject) !!}
    </div>
    <div class="form-group">
       {!! Form::label('name', 'Nombre:') !!}
       {!! Form::text('name', null, ['class'=> 'form-control', 'placeholder' => 'Ingresa el nombre de la pelicula']) !!}
    </div>
    <div class="form-group">
       {!! Form::label('language', 'Idioma:') !!}
       {!! Form::text('language', null, ['class'=> 'form-control', 'placeholder' => 'Ingresa la descripcion']) !!}
    </div><div class="form-group">
       {!! Form::label('creation_date', 'Fecha Creación:') !!}
       {!! Form::text('creation_date', null, ['class'=> 'form-control', 'placeholder' => 'Ingresa el nombre de la pelicula']) !!}
    </div>
    <div class="form-group">
       {!! Form::label('description', 'Descripcion:') !!}
       {!! Form::text('description', null, ['class'=> 'form-control', 'placeholder' => 'Ingresa la descripcion']) !!}
    </div>
    <div class="form-group">
       {!! Form::label('imageRef', 'Imagen:') !!}
       {!! Form::file('imageRef') !!}
    </div>

    <div class="form-group">
       {!! Form::label('url', 'Video:') !!}
       {!! Form::file('url') !!}
    </div>
       {!! Form::submit('Registrar',['class' =>'btn btn-primary']) !!}
       {!! Form::close() !!}

    The function in the model:

    public function setUrlAttribute($url){

           $this->attributes['url'] = 'old/'.Carbon::now()->second.$url->getClientOriginalName();
           $name = Carbon::now()->second.$url->getClientOriginalName();
           \Storage::disk('local')->put($name, \File::get($url));

           $file = pathinfo($name,PATHINFO_FILENAME);
           $extension = pathinfo($name,PATHINFO_EXTENSION);

           $ffmpeg = \FFMpeg\FFMpeg::create([
               'ffmpeg.binaries'  => '/usr/bin/ffmpeg.exe',
               'ffprobe.binaries' => '/usr/bin/ffprobe.exe',
               'timeout'          => 0, // The timeout for the underlying process
               'ffmpeg.threads'   => 12,   // The number of threads that FFMpeg should use

           ]);
           $video = $ffmpeg->open($url);
           $format = new FFMpeg\Format\Video\X264('libmp3lame', 'libx264');
           $format->on('progress', function ($video, $format, $percentage) {
               echo "$percentage % transcoded";
           });
           $format
           -> setKiloBitrate(1000)
           -> setAudioChannels(2)
           -> setAudioKiloBitrate(256);

           $video
           ->save($format, 'files/convert/'.$file.'.mp4');
           $this->attributes['url'] = $file.'.mp4';
       }

    If I convert the video in the console, I receive this:

    # ffmpeg -i 67portrait.MOV -vcodec libx264 new.mp4
    FFmpeg version 0.6.5, Copyright (c) 2000-2010 the FFmpeg developers
     built on Jan 29 2012 17:52:15 with gcc 4.4.5 20110214 (Red Hat 4.4.5-6)
     configuration: --prefix=/usr --libdir=/usr/lib64 --shlibdir=/usr/lib64 --mandir=/usr/share/man --incdir=/usr/include --disable-avisynth --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC' --enable-avfilter --enable-avfilter-lavf --enable-libdc1394 --enable-libdirac --enable-libfaac --enable-libfaad --enable-libfaadbin --enable-libgsm --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libx264 --enable-gpl --enable-nonfree --enable-postproc --enable-pthreads --enable-shared --enable-swscale --enable-vdpau --enable-version3 --enable-x11grab
     libavutil     50.15. 1 / 50.15. 1
     libavcodec    52.72. 2 / 52.72. 2
     libavformat   52.64. 2 / 52.64. 2
     libavdevice   52. 2. 0 / 52. 2. 0
     libavfilter    1.19. 0 /  1.19. 0
     libswscale     0.11. 0 /  0.11. 0
     libpostproc   51. 2. 0 / 51. 2. 0

    Seems stream 1 codec frame rate differs from container frame rate: 1200.00 (1200/1) -> 29.97 (30000/1001)
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '67portrait.MOV':
     Metadata:
       major_brand     : qt  
       minor_version   : 0
       compatible_brands: qt  
       date            : 2013-11-29T13:19:09+0100
       date-fra        : 2013-11-29T13:19:09+0100
     Duration: 00:00:02.08, start: 0.000000, bitrate: 926 kb/s
       Stream #0.0(und): Audio: aac, 44100 Hz, mono, s16, 60 kb/s
       Stream #0.1(und): Video: h264, yuv420p, 568x320, 863 kb/s, 29.98 fps, 29.97 tbr, 600 tbn, 1200 tbc
    [libx264 @ 0x24bab70]broken ffmpeg default settings detected
    [libx264 @ 0x24bab70]use an encoding preset (e.g. -vpre medium)
    [libx264 @ 0x24bab70]preset usage: -vpre <speed> -vpre <profile>
    [libx264 @ 0x24bab70]speed presets are listed in x264 --help
    [libx264 @ 0x24bab70]profile is optional; x264 defaults to high
    Output #0, mp4, to 'new.mp4':
       Stream #0.0(und): Video: libx264, yuv420p, 568x320, q=2-31, 200 kb/s, 90k tbn, 29.97 tbc
       Stream #0.1(und): Audio: libfaac, 44100 Hz, mono, s16, 64 kb/s
    Stream mapping:
     Stream #0.1 -> #0.0
     Stream #0.0 -> #0.1
    Error while opening encoder for output stream #0.0 - maybe incorrect parameters such as bit_rate, rate, width or height

    I have tried a lot of things to resolve this problem, but it still happens.

  • I created Python code to capture live video using FFmpeg, but the output screen only shows noise

    16 October 2024, by chun3 hyun

    The code below is Python code that captures my computer screen in real time via FFmpeg.


    When I run the code, everything goes well until a new window named 'Captured Frame' is created. But this 'Captured Frame' window does not show my full computer screen; instead it shows a gray image full of noise.


import cv2
import numpy as np
import subprocess

def frame_capture():
    # Set FFmpeg command (capture desired window or area)
    ffmpeg_command = [
        'ffmpeg',
        '-f', 'gdigrab',  # Windows screen capture (using gdigrab)
        '-framerate', '30',  # Setting the Frame Speed
        '-i', 'desktop',  # What to capture (for example, full screen)
        '-pix_fmt', 'bgr0',
        '-vcodec', 'rawvideo',  # Video codec settings
        '-tune', 'zerolatency',
        '-an',  # Disable audio
        '-sn',  # Disable Caption
        '-f', 'rawvideo', '-'
    ]

    # Running the FFmpeg process
    process = subprocess.Popen(ffmpeg_command, stdout=subprocess.PIPE, bufsize=10**8)

    while True:
        # Read Frame from FFmpeg (Resolution Example: 1920x1080)
        raw_frame = process.stdout.read(1920 * 1080 * 3)  # 1920x1080 resolution, BGR format
        if not raw_frame:
            break  # Shut down the loop when you can no longer receive frames

        # Converting frame data to a numpy array
        frame = np.frombuffer(raw_frame, np.uint8).reshape((1080, 1920, 3))

        # Add frame processing code here
        # Example: Showing a frame on the screen
        cv2.imshow('Captured Frame', frame)

        # Press the 'q' key to end
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    # End of process and release of resources
    process.stdout.close()
    process.wait()
    cv2.destroyAllWindows()
frame_capture()


    What could I have done wrong? When I directly enter the FFmpeg command in the Windows command prompt (cmd) as shown below to save the video (in .mp4 format), the screen is output normally in the saved file. So FFmpeg itself seems to be installed correctly, but I don't know what the cause is.


    hwnd=132554 -pix_fmt yuv420p -vf "scale=iw-mod(iw\,2):ih-mod(ih\,2)" -draw_mouse 1 -t 10 output.mp4


    The handle number written above was the handle of the active Chrome window on my computer.


    My FFmpeg version is 2024-10-10-git-0f5592cfc7-full_build-www.gyan.dev. My Python version is 3.12.4. My Windows version and build: Windows 11 Home, 10.0.22631.


    I tried capturing the computer screen with FFmpeg, but the output shows only noise.
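
    One plausible source of the noise, offered as an assumption rather than a confirmed diagnosis: '-pix_fmt bgr0' makes FFmpeg emit 4 bytes per pixel, while the script reads 1920 * 1080 * 3 bytes and reshapes to three channels, so every frame is read misaligned. A minimal sketch that keeps the byte count and the reshape consistent by requesting bgr24 (3 bytes per pixel); the 1920x1080 size is an assumption and must match the actual desktop resolution.

# Sketch: request bgr24 so one frame is exactly WIDTH * HEIGHT * 3 bytes,
# matching the numpy reshape below.
import subprocess

import cv2
import numpy as np

WIDTH, HEIGHT = 1920, 1080                 # assumed desktop resolution
FRAME_BYTES = WIDTH * HEIGHT * 3           # bgr24: 3 bytes per pixel

cmd = [
    "ffmpeg",
    "-f", "gdigrab", "-framerate", "30", "-i", "desktop",
    "-pix_fmt", "bgr24",                   # what the reshape and cv2.imshow expect
    "-f", "rawvideo", "-",
]
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, bufsize=10**8)

while True:
    raw = proc.stdout.read(FRAME_BYTES)
    if len(raw) < FRAME_BYTES:             # stream ended or short read
        break
    frame = np.frombuffer(raw, np.uint8).reshape((HEIGHT, WIDTH, 3))
    cv2.imshow("Captured Frame", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

proc.stdout.close()
proc.wait()
cv2.destroyAllWindows()

    Keeping '-pix_fmt bgr0' would also work if the read size and the reshape used 4 bytes per pixel and the padding channel were dropped before display.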
