
Other articles (73)

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation by users as well as developers, including: critiques of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; translations of existing documentation into other languages.
    To contribute, register to the project users’ mailing (...)

  • MediaSPIP in private mode (Intranet)

    17 September 2013, by

    Starting with version 0.3, a MediaSPIP channel can be made private, blocked to any unidentified visitor thanks to the "Intranet/extranet" plugin.
    When activated, the Intranet/extranet plugin blocks access to the channel for any unidentified visitor, preventing them from reaching the content by systematically redirecting them to the login form.
    This system can be particularly useful in certain situations, such as: a working session with children whose content must not (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an additional plugin that is not activated by default when MediaSPIP is initialised.
    Once it is activated, a preconfiguration is automatically set up by MediaSPIP init, making the new feature immediately operational. It is therefore not necessary to go through a configuration step for this.

On other sites (10170)

  • node.js ffmpeg spawn child_process unexpected data output

    5 September 2021, by PLNR

    
    I'm rather new to backend stuff, so please excuse me if my question is trivial.

    For an Intranet project, I want to present a video element in a webpage (React, HLS.js player).

    The video sources are MPEG-TS streams delivered as UDP multicast. A small node.js / express server should handle the ffmpeg commands to transcode the multicast streams to HLS for display in a browser.

    The problem is the output: it is emitted on stderr, even though the process is working as expected.

    Here is the respective code I wrote for transcoding so far:

    


const express = require("express");
const { spawn } = require("child_process");

const app = express();

app.get('/cam/:source', (req, res) => {
    // name of the requested camera source
    const cam = req.params.source;
    console.log(cam);

    const source = "udp://239.1.1.1:4444";
    const playlist = "/var/www/html/streams/tmp/cam1.m3u8";

    // transcode the multicast source to an HLS playlist
    const stream = spawn("ffmpeg", ["-re", "-i", source, "-c:v", "libx264", "-crf", "21", "-preset", "veryfast", "-c:a", "aac", "-b:a", "128k", "-ac", "2", "-f", "hls", "-hls_list_size", "5", "-hls_flags", "delete_segments", playlist], {detached: true});

    stream.stdout.on("data", data => {
        console.log(`stdout: ${data}`);
    });

    stream.stderr.on("data", data => {
        console.log(`stderr: ${data}`);
    });

    stream.on("error", error => {
        console.log(`error: ${error.message}`);
    });

    stream.on("close", code => {
        console.log(`child process exited with code ${code}`);
    });
});

app.listen(5000, () => {
    console.log('Listening');
});


    


    This may be only cosmetic, but it makes me wonder.

    Here is the terminal output:

    [nodemon] starting `node server.js`
Listening
camera stream reloaded
stderr: ffmpeg version 4.3.2-0+deb11u1ubuntu1 Copyright (c) 2000-2021 the FFmpeg developers
  built with gcc 10 (Ubuntu 10.2.1-20ubuntu1)

  --shortened--


pid:  4206
stderr: frame=    8 fps=0.0 q=0.0 size=N/A time=00:00:00.46 bitrate=N/A speed=0.931x    
pid:  4206
stderr: frame=   21 fps= 21 q=26.0 size=N/A time=00:00:00.96 bitrate=N/A speed=0.95x    
pid:  4206
stderr: frame=   33 fps= 22 q=26.0 size=N/A time=00:00:01.49 bitrate=N/A speed=0.982x    
pid:  4206
stderr: frame=   46 fps= 23 q=26.0 size=N/A time=00:00:02.00 bitrate=N/A speed=0.989x    
pid:  4206
stderr: frame=   58 fps= 23 q=26.0 size=N/A time=00:00:02.49 bitrate=N/A speed=0.986x    
pid:  4206


    


    and so on...

    Any helpful information would be highly appreciated!

    Many thanks in advance.
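For what it's worth, the stderr behaviour described here is ffmpeg working as designed: the banner, log messages and the `frame=... fps=...` statistics all go to stderr, so that stdout stays free for piped media data. If machine-readable progress is wanted, one option (a sketch, not the only way) is to add `-progress pipe:1` to the argument list, which makes ffmpeg write newline-separated `key=value` progress lines to stdout; `parseProgress` below is a hypothetical helper for such chunks, not part of the original server:

```javascript
// "-progress pipe:1" makes ffmpeg emit key=value pairs on stdout
// (frame=..., fps=..., out_time=..., progress=continue/end).
// parseProgress collects one chunk of that output into a plain object.
function parseProgress(chunk) {
    const stats = {};
    for (const line of chunk.toString().split("\n")) {
        const idx = line.indexOf("=");
        if (idx > 0) {
            stats[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
        }
    }
    return stats;
}

// A chunk shaped like ffmpeg's progress output:
const sample = "frame=58\nfps=23.0\nout_time=00:00:02.490000\nprogress=continue\n";
const stats = parseProgress(sample);
console.log(stats.frame, stats.out_time); // 58 00:00:02.490000
```

With that flag in place, the `stream.stdout.on("data", ...)` handler receives the progress, and stderr can be kept for genuine diagnostics (or quietened with `-loglevel error`).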

    


  • Moviepy audio is merging together in my script. How do I fix it?

    9 November 2023, by Corey4005

    I have a dataframe containing speech recordings and video files I want to merge. For example, here is what the dataframe looks like:

    speech_paths,vid_paths,start,stop,short_option, 
Recording.m4a,hr.mp4,00:11:11.520,00:11:22.800,N,
Recording2.m4a,hr.mp4,00:04:38.800,00:04:54.840,N, 
Recording3.m4a,hr.mp4,00:05:12.520,00:05:35.600,N, 
Recording4.m4a,hr.mp4,00:10:36.440,00:11:11.520,N,  


    


    My goal is to loop through this CSV and combine each recording with the video. The start and stop stamps mark where the audio from the video file should begin and end. However, the recording in "speech_paths" should be added to the beginning of each video, before the video's own audio starts. Essentially, I am creating an audio track that combines my voice with the video's audio, with my voice at the start of each clip. The video will also be extended at the beginning: it keeps playing, with no sound, while my voice is heard.

    


    Here is the code that does this:

    #get the directory containing the .csv
  df = pd.read_csv('./csv-data/clip-config.csv')
  speech = df['speech_paths']
  startstamps = df['start']
  endstamps = df['stop']
  videos = df['vid_paths']

  #create standard recording path
  record_path = 'C:/Users/Corey4005/Documents/Sound recordings'

  #current directory 
  cwd = os.getcwd()

  #video locations 
  videos_path = os.path.join(cwd, 'inputvideos')
  outputvideos_path = os.path.join(cwd, 'outputvideos')
  srt_path = os.path.join(cwd, 'srtfile')

  #a list to concatenate all of the clips into one video if df > 0
  clips_list = []

  count = 0
  #get name of filepath 
  for i in range(len(df)):
    count +=1

    #adding the name of the speech file to the variable
    speech_file = speech[i]

    #selecting the start and end stamps to download from yt
    start_stamp = startstamps[i]
    end_stamp = endstamps[i]

    #selecting the video file
    video_file = videos[i]

    #getting the video file 
    path_to_video = os.path.join(videos_path, video_file)
    path_to_mp3 = os.path.join(record_path, speech_file)

    print("----------- Progress: {} / {} videos processed -----------".format(count, len(df)))
    print("----------- Combining the Following Files: ")
    print("----------- Speech: {}".format(path_to_mp3))
    print("----------- Video: {}".format(path_to_video))

    #need the audio length to get the appropriate start time for the new clip
    audio_length = get_audio_length(path_to_mp3)

    print('----------- Writing mono speech file')
    #create an audio clip of the new audio that is now .mp3 and convert from stereo to mono
    mp.AudioFileClip(path_to_mp3).write_audiofile('mono.mp3', ffmpeg_params=["-ac", "1"])
    

    #create the overall big clip that is the size of the audio + the video in question
    big_clip = clip_video(path_to_video, start_stamp, end_stamp, audio_length)

    #create the first clip the size of the speech file, or from 0 -> end of audio_length
    first_clip = big_clip.subclip(0, audio_length)

    #set first clip audio as speech file
    audioclip = mp.AudioFileClip("mono.mp3")
    first_clip.audio=audioclip
  
    #create a second clip the size of the rest of the file or from audio_length -> end
    second_clip = big_clip.subclip(audio_length)

    # Concatenate the two subclips
    final_clip = mp.concatenate_videoclips([first_clip, second_clip])

    if len(df)>1:
      
      #for youtube
      clips_list.append(final_clip)
      
    else:
      ytoutpath = os.path.join(outputvideos_path, 'youtube.mp4')

      print('----------- Writing combined speech and videofile')
      #youtube
      final_clip.write_videofile(ytoutpath)
      #yt filepath 

      ytfilepath = os.path.abspath(ytoutpath)


      #create subtitles filepath
      print("----------- generating srt file")
      transcribefile = video_to_srt(ytfilepath, srt_path)

      #create videos that are subtitles 
      print("----------- subtitling youtube video")
      subtitledyt = create_subtitles(ytfilepath, transcribefile, 'yt', outputvideos_path)

      #resize the video for tt, resized is the filename
      print('----------- generating tiktok video')
      resized = resize(final_clip, count, outputvideos_path)
      
      print('----------- subtitling tiktokvideo')
      tiktoksubtitled = create_subtitles(resized, transcribefile, 'tt', outputvideos_path)

  if len(df)>1:
    #writing the final clips list into one concatenated video
    print("----------- Concatenating all {} videos -----------".format(len(df)))
    concatinate_all = mp.concatenate_videoclips(clips_list)
    
    #creating paths to save videos to 
    ytoutpath = os.path.join(outputvideos_path, 'concat_youtube.mp4')

    #write out file for iphone
    concatinate_all.write_videofile(ytoutpath)


    


    Here are some other functions used in the main script I created, which show the complete context:

    def get_audio_length(filepath: str) -> float:
    print('----------- Retrieving audio length')
    seconds = librosa.get_duration(filename=filepath)
    print(f'----------- Seconds: {seconds}')
    return seconds

def clip_video(input_video: str, start_stamp: str, end_stamp: str, delta: float | None = None) -> mp.VideoFileClip:
  # Load the video.
  video = mp.VideoFileClip(input_video)

  #converting timestamp to seconds 
  if delta:
    start_stamp = convert_timestamp(start_stamp)-delta
    end_stamp = convert_timestamp(end_stamp)
    clip = video.subclip(start_stamp, end_stamp)

  else:
  # Clip the video.
    clip = video.subclip(convert_timestamp(start_stamp), convert_timestamp(end_stamp))
  
  return clip


def convert_timestamp(timestamp: str) -> float:
    
    # Split the timestamp on the `:` character.
    hours, minutes, seconds = timestamp.split(":")  
    seconds, ms = seconds.split('.')
    # Convert the time string to a timedelta object.
    timedelta_object = datetime.timedelta(hours=int(hours), minutes=int(minutes), seconds=int(seconds), milliseconds=int(ms))
    #convert to seconds 
    seconds = timedelta_object.total_seconds()
    return seconds
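As a quick sanity check of the conversion helper above (its logic is reproduced here so the snippet is self-contained), the CSV's first start stamp, 00:11:11.520, should come out as 671.52 seconds:

```python
import datetime

# Same logic as the posted convert_timestamp, reproduced for a
# self-contained check of the HH:MM:SS.mmm -> seconds conversion.
def convert_timestamp(timestamp: str) -> float:
    hours, minutes, seconds = timestamp.split(":")
    seconds, ms = seconds.split(".")
    td = datetime.timedelta(hours=int(hours), minutes=int(minutes),
                            seconds=int(seconds), milliseconds=int(ms))
    return td.total_seconds()

print(convert_timestamp("00:11:11.520"))  # 671.52
```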


    


    My problem is that Recording4.m4a is bleeding into the last part of each of the recordings above it. I am not sure why this is happening, as I create a completely new "mono.mp3" file each time. Essentially, this file is a mono (instead of stereo) version of the "speech" file I add to the front of each video.

    How do I stop the final recording from bleeding into the others? Each of my audio files starts with the correct sound, but about halfway through, the fourth recording interrupts and starts playing. I feel like I am missing some understanding of how moviepy works.
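One possible explanation for the bleeding, offered as an assumption rather than a confirmed diagnosis: MoviePy reads audio files lazily, so each `mp.AudioFileClip("mono.mp3")` created in the loop keeps pointing at the file on disk, and by the time the concatenated clips are finally rendered, the shared "mono.mp3" has been overwritten with the last recording. A minimal sketch of the usual workaround is to write a uniquely named mono file per iteration (`mono_path` is a hypothetical helper, not part of the original script):

```python
import os

def mono_path(index: int, out_dir: str = ".") -> str:
    """Per-iteration mono file name, instead of reusing a single 'mono.mp3'."""
    return os.path.join(out_dir, f"mono_{index}.mp3")

# Inside the loop, the two places that touch "mono.mp3" would then become:
#   mp.AudioFileClip(path_to_mp3).write_audiofile(mono_path(i), ffmpeg_params=["-ac", "1"])
#   audioclip = mp.AudioFileClip(mono_path(i))
```

Each iteration then renders from its own file, so later writes cannot retroactively change earlier clips' audio.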

    


  • x86/vvcdec: add dmvr avx2 code

    25 July 2024, by Nuo Mi
    x86/vvcdec: add dmvr avx2 code
    

    Decoder-Side Motion Vector Refinement accounts for about 4-8% of CPU usage for some clips.

    Here is the test result for one run:

    clips | before | after | delta
    ------|--------|-------|------
    RitualDance_1920x1080_60_10_420_37_RA.266 | 338.7 | 354.3 | 4.61%
    NovosobornayaSquare_1920x1080.bin | 320.3 | 329.3 | 2.81%
    Tango2_3840x2160_60_10_420_27_LD.266 | 83.3 | 83.7 | 0.48%
    RitualDance_1920x1080_60_10_420_32_LD.266 | 320.7 | 327.3 | 2.06%
    Chimera_8bit_1080P_1000_frames.vvc | 360.7 | 381.0 | 5.63%
    BQTerrace_1920x1080_60_10_420_22_RA.vvc | 161.7 | 163.0 | 0.80%

    • [DH] libavcodec/x86/vvc/Makefile
    • [DH] libavcodec/x86/vvc/vvc_dmvr.asm
    • [DH] libavcodec/x86/vvc/vvcdsp_init.c