Advanced search

Media (91)

Other articles (69)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, provided your MediaSPIP installation is at version 0.2 or higher. If in doubt, contact your MediaSPIP administrator to find out.

  • Standalone installation

    4 February 2011

    Installing the MediaSPIP distribution involves several steps: retrieving the required files (two methods are possible here: installing the ZIP archive containing the whole distribution, or fetching the sources of each module separately via SVN); preconfiguration; and the final installation.
    [mediaspip_zip]Installing the MediaSPIP ZIP archive
    This installation mode is the simplest way to install the whole distribution (...)

  • The plugin: mutualisation management

    2 March 2010

    The mutualisation management plugin makes it possible to manage the various MediaSPIP channels from a master site. Its purpose is to provide a pure-SPIP solution to replace the previous one.
    Basic installation
    Install the SPIP files on the server.
    Then add the "mutualisation" plugin at the root of the site, as described here.
    Customise the central mes_options.php file as you wish. As an example, here is the one used by the mediaspip.net platform:
    <?php (...)

On other sites (6455)

  • Converting python program with custom audio ffmpeg command to rust

    4 October 2022, by rust_convert

    To work on learning Rust (in a Tauri project), I am converting a Python 2 program that uses ffmpeg to create a custom video format from a GUI. The video portion converts successfully, but I am unable to get the audio to work. From the debugging I have done over the past few days, it looks like I am not reading the audio data correctly in Rust - the approach that works for the video data does not work for the audio. I have tried reading the audio data as a string and then converting it to bytes, but the byte array then appears empty, so I have been looking into how the data is piped and cannot sort out what's wrong.

    The Python code snippet for video and audio conversion:

    output=open(self.outputFile, 'wb')
    devnull = open(os.devnull, 'wb')

    vidcommand = [ FFMPEG_BIN,
                '-i', self.inputFile,
                '-f', 'image2pipe',
                '-r', '%d' % (self.outputFrameRate),
                '-vf', scaleCommand,
                '-vcodec', 'rawvideo',
                '-pix_fmt', 'bgr565be',
                '-f', 'rawvideo', '-']

    vidPipe = ''
    if os.name=='nt':
        startupinfo = sp.STARTUPINFO()
        startupinfo.dwFlags |= sp.STARTF_USESHOWWINDOW
        vidPipe=sp.Popen(vidcommand, stdin=sp.PIPE, stdout=sp.PIPE, stderr=devnull, bufsize=self.inputVidFrameBytes*10, startupinfo=startupinfo)
    else:
        vidPipe=sp.Popen(vidcommand, stdin=sp.PIPE, stdout=sp.PIPE, stderr=devnull, bufsize=self.inputVidFrameBytes*10)

    vidFrame = vidPipe.stdout.read(self.inputVidFrameBytes)

    audioCommand = [ FFMPEG_BIN,
        '-i', self.inputFile,
        '-f', 's16le',
        '-acodec', 'pcm_s16le',
        '-ar', '%d' % (self.outputAudioSampleRate),
        '-ac', '1',
        '-']

    audioPipe=''
    if (self.audioEnable.get() == 1):
        if os.name=='nt':
            startupinfo = sp.STARTUPINFO()
            startupinfo.dwFlags |= sp.STARTF_USESHOWWINDOW
            audioPipe = sp.Popen(audioCommand, stdin=sp.PIPE, stdout=sp.PIPE, stderr=devnull, bufsize=self.audioFrameBytes*10, startupinfo=startupinfo)
        else:
            audioPipe = sp.Popen(audioCommand, stdin=sp.PIPE, stdout=sp.PIPE, stderr=devnull, bufsize=self.audioFrameBytes*10)

        audioFrame = audioPipe.stdout.read(self.audioFrameBytes)

    currentFrame=0

    while len(vidFrame)==self.inputVidFrameBytes:
        currentFrame+=1
        if(currentFrame%30==0):
            self.progressBarVar.set(100.0*(currentFrame*1.0)/self.totalFrames)
        if (self.videoBitDepth.get() == 16):
            output.write(vidFrame)
        else:
            b16VidFrame=bytearray(vidFrame)
            b8VidFrame=[]
            for p in range(self.outputVidFrameBytes):
                b8VidFrame.append(((b16VidFrame[(p*2)+0]>>0)&0xE0)|((b16VidFrame[(p*2)+0]<<2)&0x1C)|((b16VidFrame[(p*2)+1]>>3)&0x03))
            output.write(bytearray(b8VidFrame))

        vidFrame = vidPipe.stdout.read(self.inputVidFrameBytes)  # Read where vidframe is to match up with audio frame and output?
        if (self.audioEnable.get() == 1):
            if len(audioFrame)==self.audioFrameBytes:
                audioData=bytearray(audioFrame)

                for j in range(int(round(self.audioFrameBytes/2))):
                    sample = ((audioData[(j*2)+1]<<8) | audioData[j*2]) + 0x8000
                    sample = (sample>>(16-self.outputAudioSampleBitDepth)) & (0x0000FFFF>>(16-self.outputAudioSampleBitDepth))

                    audioData[j*2] = sample & 0xFF
                    audioData[(j*2)+1] = sample>>8

                output.write(audioData)
                audioFrame = audioPipe.stdout.read(self.audioFrameBytes)

            else:
                emptySamples=[]
                for samples in range(int(round(self.audioFrameBytes/2))):
                    emptySamples.append(0x00)
                    emptySamples.append(0x00)
                output.write(bytearray(emptySamples))

    self.progressBarVar.set(100.0)

    vidPipe.terminate()
    vidPipe.stdout.close()
    vidPipe.wait()

    if (self.audioEnable.get() == 1):
        audioPipe.terminate()
        audioPipe.stdout.close()
        audioPipe.wait()

    output.close()

    The Rust snippet that should accomplish the same goals:

    let output_file = OpenOptions::new()
        .create(true)
        .truncate(true)
        .write(true)
        .open(&output_path)
        .unwrap();
    let mut writer = BufWriter::with_capacity(
        options.video_frame_bytes.max(options.audio_frame_bytes),
        output_file,
    );
    let ffmpeg_path = sidecar_path("ffmpeg");
    #[cfg(debug_assertions)]
    let timer = Instant::now();

    let mut video_cmd = Command::new(&ffmpeg_path);
    #[rustfmt::skip]
    video_cmd.args([
        "-i", options.path,
        "-f", "image2pipe",
        "-r", options.frame_rate,
        "-vf", options.scale,
        "-vcodec", "rawvideo",
        "-pix_fmt", "bgr565be",
        "-f", "rawvideo",
        "-",
    ])
    .stdin(Stdio::null())
    .stdout(Stdio::piped())
    .stderr(Stdio::null());

    // windows creation flag CREATE_NO_WINDOW: stops the process from creating a CMD window
    // https://docs.microsoft.com/en-us/windows/win32/procthread/process-creation-flags
    #[cfg(windows)]
    video_cmd.creation_flags(0x08000000);

    let mut video_child = video_cmd.spawn().unwrap();
    let mut video_stdout = video_child.stdout.take().unwrap();
    let mut video_frame = vec![0; options.video_frame_bytes];

    let mut audio_cmd = Command::new(&ffmpeg_path);
    #[rustfmt::skip]
    audio_cmd.args([
        "-i", options.path,
        "-f", "s16le",
        "-acodec", "pcm_s16le",
        "-ar", options.sample_rate,
        "-ac", "1",
        "-",
    ])
    .stdin(Stdio::null())
    .stdout(Stdio::piped())
    .stderr(Stdio::null());

    #[cfg(windows)]
    audio_cmd.creation_flags(0x08000000);

    let mut audio_child = audio_cmd.spawn().unwrap();
    let mut audio_stdout = audio_child.stdout.take().unwrap();
    let mut audio_frame = vec![0; options.audio_frame_bytes];

    while video_stdout.read_exact(&mut video_frame).is_ok() {
        writer.write_all(&video_frame).unwrap();

        if audio_stdout.read_to_end(&mut audio_frame).is_ok() {
            if audio_frame.len() == options.audio_frame_bytes {
                for i in 0..options.audio_frame_bytes / 2 {
                    let temp_sample = ((u32::from(audio_frame[(i * 2) + 1]) << 8)
                        | u32::from(audio_frame[i * 2]))
                        + 0x8000;
                    let sample = (temp_sample >> (16 - 10)) & (0x0000FFFF >> (16 - 10));

                    audio_frame[i * 2] = (sample & 0xFF) as u8;
                    audio_frame[(i * 2) + 1] = (sample >> 8) as u8;
                }
            } else {
                audio_frame.fill(0x00);
            }
        }
        writer.write_all(&audio_frame).unwrap();
    }

    video_child.wait().unwrap();
    audio_child.wait().unwrap();

    #[cfg(debug_assertions)]
    {
        let elapsed = timer.elapsed();
        dbg!(elapsed);
    }

    writer.flush().unwrap();

    I have looked at the hex data of the files using HxD - regardless of how I alter the Rust program, I cannot get output different from what is previewed in the attached image - so it's likely that the audio conversion isn't working at all and the file contains only the video data. There is also a screenshot of the hex data from the working Python program, which converts the video and audio correctly.
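    One difference between the two loops stands out: the Python version reads a fixed number of bytes per iteration (audioPipe.stdout.read(self.audioFrameBytes)), while the Rust version calls read_to_end, which blocks until the audio pipe closes and appends to the buffer rather than overwriting it, so audio_frame.len() will likely never equal audio_frame_bytes and the zero-fill branch runs every iteration. A minimal sketch of frame-sized reads with read_exact instead, using an in-memory Cursor to stand in for the child's piped stdout:

    ```rust
    use std::io::{Cursor, Read};

    fn main() {
        // Stand-in for audio_child's piped stdout: 3 frames of 4 bytes each.
        let mut audio_stdout = Cursor::new(vec![1u8; 12]);
        let audio_frame_bytes = 4;
        let mut audio_frame = vec![0u8; audio_frame_bytes];
        let mut frames_read = 0;

        // read_exact overwrites the buffer in place on each iteration; once
        // the pipe is exhausted it returns Err(UnexpectedEof), and we zero-fill
        // the frame instead, mirroring the Python fallback branch.
        loop {
            match audio_stdout.read_exact(&mut audio_frame) {
                Ok(()) => frames_read += 1,
                Err(_) => {
                    audio_frame.fill(0x00);
                    break;
                }
            }
        }
        println!("frames_read={}", frames_read);
    }
    ```

    With a real child process, audio_stdout would be audio_child.stdout.take().unwrap() exactly as in the snippet above; only the read call changes.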

    HxD Python program hex output: [screenshot]

    HxD Rust program hex output: [screenshot]

  • How do terminal pipes in Python differ from those in Rust? [closed]

    5 October 2022, by rust_convert

    To work on learning Rust (in a Tauri project), I am converting a Python 2 program that uses ffmpeg to create a custom video format from a GUI. The video portion converts successfully, but I am unable to get the audio to work. From the debugging I have done over the past few days, it looks like I am not reading the audio data correctly from the pipe in Rust - the approach that works for the video data does not work for the audio. I have tried reading the audio data as a string and then converting it to bytes, but the byte array then appears empty. I have been researching how data is piped in the Rust documentation and the Python documentation, and I am unsure how the Rust pipe could be empty or incorrect if the same approach works for the video.

    From this Python article and this Rust Stack Overflow exchange, it looks like Python's stdout pipe is equivalent to Rust's stdin pipe?
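    For what it's worth, the mapping is more direct than that: Python's sp.Popen(..., stdout=sp.PIPE) corresponds to Rust's .stdout(Stdio::piped()) on the Command, and in both languages the parent reads from the child's standard output, so vidPipe.stdout.read(n) lines up with video_stdout.read_exact(&mut buf). A minimal sketch of the Rust side (assuming a Unix-like system with echo on the PATH):

    ```rust
    use std::io::Read;
    use std::process::{Command, Stdio};

    fn main() {
        // Equivalent of Python's sp.Popen(["echo", "hello"], stdout=sp.PIPE):
        // the child's stdout is captured by a pipe that the parent reads.
        let mut child = Command::new("echo")
            .arg("hello")
            .stdout(Stdio::piped())
            .spawn()
            .expect("failed to spawn echo");

        let mut out = String::new();
        child
            .stdout
            .take()
            .unwrap()
            .read_to_string(&mut out)
            .unwrap();
        child.wait().unwrap();

        println!("{}", out.trim());
    }
    ```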

    The Python code snippet for video and audio conversion:

    output=open(self.outputFile, 'wb')
    devnull = open(os.devnull, 'wb')

    vidcommand = [ FFMPEG_BIN,
                '-i', self.inputFile,
                '-f', 'image2pipe',
                '-r', '%d' % (self.outputFrameRate),
                '-vf', scaleCommand,
                '-vcodec', 'rawvideo',
                '-pix_fmt', 'bgr565be',
                '-f', 'rawvideo', '-']

    vidPipe = ''
    if os.name=='nt':
        startupinfo = sp.STARTUPINFO()
        startupinfo.dwFlags |= sp.STARTF_USESHOWWINDOW
        vidPipe=sp.Popen(vidcommand, stdin=sp.PIPE, stdout=sp.PIPE, stderr=devnull, bufsize=self.inputVidFrameBytes*10, startupinfo=startupinfo)
    else:
        vidPipe=sp.Popen(vidcommand, stdin=sp.PIPE, stdout=sp.PIPE, stderr=devnull, bufsize=self.inputVidFrameBytes*10)

    vidFrame = vidPipe.stdout.read(self.inputVidFrameBytes)

    audioCommand = [ FFMPEG_BIN,
        '-i', self.inputFile,
        '-f', 's16le',
        '-acodec', 'pcm_s16le',
        '-ar', '%d' % (self.outputAudioSampleRate),
        '-ac', '1',
        '-']

    audioPipe=''
    if (self.audioEnable.get() == 1):
        if os.name=='nt':
            startupinfo = sp.STARTUPINFO()
            startupinfo.dwFlags |= sp.STARTF_USESHOWWINDOW
            audioPipe = sp.Popen(audioCommand, stdin=sp.PIPE, stdout=sp.PIPE, stderr=devnull, bufsize=self.audioFrameBytes*10, startupinfo=startupinfo)
        else:
            audioPipe = sp.Popen(audioCommand, stdin=sp.PIPE, stdout=sp.PIPE, stderr=devnull, bufsize=self.audioFrameBytes*10)

        audioFrame = audioPipe.stdout.read(self.audioFrameBytes)

    currentFrame=0

    while len(vidFrame)==self.inputVidFrameBytes:
        currentFrame+=1
        if(currentFrame%30==0):
            self.progressBarVar.set(100.0*(currentFrame*1.0)/self.totalFrames)
        if (self.videoBitDepth.get() == 16):
            output.write(vidFrame)
        else:
            b16VidFrame=bytearray(vidFrame)
            b8VidFrame=[]
            for p in range(self.outputVidFrameBytes):
                b8VidFrame.append(((b16VidFrame[(p*2)+0]>>0)&0xE0)|((b16VidFrame[(p*2)+0]<<2)&0x1C)|((b16VidFrame[(p*2)+1]>>3)&0x03))
            output.write(bytearray(b8VidFrame))

        vidFrame = vidPipe.stdout.read(self.inputVidFrameBytes)  # Read where vidframe is to match up with audio frame and output?
        if (self.audioEnable.get() == 1):
            if len(audioFrame)==self.audioFrameBytes:
                audioData=bytearray(audioFrame)

                for j in range(int(round(self.audioFrameBytes/2))):
                    sample = ((audioData[(j*2)+1]<<8) | audioData[j*2]) + 0x8000
                    sample = (sample>>(16-self.outputAudioSampleBitDepth)) & (0x0000FFFF>>(16-self.outputAudioSampleBitDepth))

                    audioData[j*2] = sample & 0xFF
                    audioData[(j*2)+1] = sample>>8

                output.write(audioData)
                audioFrame = audioPipe.stdout.read(self.audioFrameBytes)

            else:
                emptySamples=[]
                for samples in range(int(round(self.audioFrameBytes/2))):
                    emptySamples.append(0x00)
                    emptySamples.append(0x00)
                output.write(bytearray(emptySamples))

    self.progressBarVar.set(100.0)

    vidPipe.terminate()
    vidPipe.stdout.close()
    vidPipe.wait()

    if (self.audioEnable.get() == 1):
        audioPipe.terminate()
        audioPipe.stdout.close()
        audioPipe.wait()

    output.close()

    The Rust snippet that should accomplish the same goals:

    let output_file = OpenOptions::new()
        .create(true)
        .truncate(true)
        .write(true)
        .open(&output_path)
        .unwrap();
    let mut writer = BufWriter::with_capacity(
        options.video_frame_bytes.max(options.audio_frame_bytes),
        output_file,
    );
    let ffmpeg_path = sidecar_path("ffmpeg");
    #[cfg(debug_assertions)]
    let timer = Instant::now();

    let mut video_cmd = Command::new(&ffmpeg_path);
    #[rustfmt::skip]
    video_cmd.args([
        "-i", options.path,
        "-f", "image2pipe",
        "-r", options.frame_rate,
        "-vf", options.scale,
        "-vcodec", "rawvideo",
        "-pix_fmt", "bgr565be",
        "-f", "rawvideo",
        "-",
    ])
    .stdin(Stdio::null())
    .stdout(Stdio::piped())
    .stderr(Stdio::null());

    // windows creation flag CREATE_NO_WINDOW: stops the process from creating a CMD window
    // https://docs.microsoft.com/en-us/windows/win32/procthread/process-creation-flags
    #[cfg(windows)]
    video_cmd.creation_flags(0x08000000);

    let mut video_child = video_cmd.spawn().unwrap();
    let mut video_stdout = video_child.stdout.take().unwrap();
    let mut video_frame = vec![0; options.video_frame_bytes];

    let mut audio_cmd = Command::new(&ffmpeg_path);
    #[rustfmt::skip]
    audio_cmd.args([
        "-i", options.path,
        "-f", "s16le",
        "-acodec", "pcm_s16le",
        "-ar", options.sample_rate,
        "-ac", "1",
        "-",
    ])
    .stdin(Stdio::null())
    .stdout(Stdio::piped())
    .stderr(Stdio::null());

    #[cfg(windows)]
    audio_cmd.creation_flags(0x08000000);

    let mut audio_child = audio_cmd.spawn().unwrap();
    let mut audio_stdout = audio_child.stdout.take().unwrap();
    let mut audio_frame = vec![0; options.audio_frame_bytes];

    while video_stdout.read_exact(&mut video_frame).is_ok() {
        writer.write_all(&video_frame).unwrap();

        if audio_stdout.read_to_end(&mut audio_frame).is_ok() {
            if audio_frame.len() == options.audio_frame_bytes {
                for i in 0..options.audio_frame_bytes / 2 {
                    let temp_sample = ((u32::from(audio_frame[(i * 2) + 1]) << 8)
                        | u32::from(audio_frame[i * 2]))
                        + 0x8000;
                    let sample = (temp_sample >> (16 - 10)) & (0x0000FFFF >> (16 - 10));

                    audio_frame[i * 2] = (sample & 0xFF) as u8;
                    audio_frame[(i * 2) + 1] = (sample >> 8) as u8;
                }
            } else {
                audio_frame.fill(0x00);
            }
        }
        writer.write_all(&audio_frame).unwrap();
    }

    video_child.wait().unwrap();
    audio_child.wait().unwrap();

    #[cfg(debug_assertions)]
    {
        let elapsed = timer.elapsed();
        dbg!(elapsed);
    }

    writer.flush().unwrap();

    I have looked at the hex data of the files using HxD - regardless of how I alter the Rust program, I cannot get output different from what is previewed in the attached image - so the audio pipe is likely being read incorrectly. I included a screenshot of the hex data from the working Python program, which converts the video and audio correctly.

    HxD Python program hex output: [screenshot]

    HxD Rust program hex output: [screenshot]

  • video.js record timestamp blob is invalid after the first one [closed]

    10 October 2022, by Codengine

    I am building a video messaging service. I have used video.js and the videojs-record plugin to record video and audio.

    I have set the timeSlice option of the record plugin to 2 seconds, so every 2 seconds I get a blob which I can save on the server and merge later.

    Everything is fine so far: I can get the blobs and upload them to the server.

    But the issue is that only the first blob is valid; I cannot concatenate the other blobs. I have tried both ffmpeg and OpenCV to merge the videos in the backend, and both report that every blob after the first is invalid.

    I need help with this. I have been struggling with it for the last few days with no solution yet.
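    One detail that may explain this: timesliced recording of this kind typically comes from the browser's MediaRecorder, and with a timeslice only the first chunk contains the container header; the later chunks are continuations of the same stream, not standalone files, which is why tools reject them individually. The usual approach is to concatenate the chunk bytes in upload order into a single file first, and only then hand the result to ffmpeg. A minimal sketch of that byte-wise concatenation on the backend (file names are hypothetical):

    ```rust
    use std::fs::File;
    use std::io::{self, Read, Write};

    // Append each chunk's raw bytes, in recording order, to one output file.
    // Later chunks are continuations of the first chunk's container, so only
    // the concatenated whole is a playable file.
    fn concat_chunks(chunk_paths: &[&str], out_path: &str) -> io::Result<()> {
        let mut out = File::create(out_path)?;
        for path in chunk_paths {
            let mut buf = Vec::new();
            File::open(path)?.read_to_end(&mut buf)?;
            out.write_all(&buf)?;
        }
        Ok(())
    }

    fn main() -> io::Result<()> {
        // Demo with temporary files standing in for uploaded blobs.
        std::fs::write("chunk0.bin", b"header+data")?;
        std::fs::write("chunk1.bin", b"more-data")?;
        concat_chunks(&["chunk0.bin", "chunk1.bin"], "merged.bin")?;
        let merged = std::fs::read("merged.bin")?;
        println!("merged {} bytes", merged.len());
        Ok(())
    }
    ```

    After merging, a remux such as `ffmpeg -i merged.webm -c copy fixed.webm` can rebuild clean timestamps if players still complain; that step is an assumption about the container, not something the question confirms.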
