
Other articles (95)

  • Update from version 0.1 to 0.2

    24 June 2013

    Explanation of the notable changes involved in moving from version 0.1 of MediaSPIP to version 0.2. What's new?
    On the software-dependency side: use of the latest versions of FFmpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe for metadata retrieval; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed in (...)

  • Customising by adding your logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present the changes to your MediaSPIP, or news about your projects, on your MediaSPIP using the news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the form used to create a news item.
    News item creation form. In the case of a document of the news type, the fields offered by default are: publication date (customise the publication date) (...)

On other sites (13061)

  • Processing video frame by frame in AWS Lambda with Node.js and FFmpeg [closed]

    29 December 2023, by Aviato

    I am working on a project where I need to process video frames one at a time in an AWS Lambda function using Node.js. My goal is to avoid storing all of the frames in memory or on the filesystem, due to resource constraints. I plan to use the fluent-ffmpeg library, or ffmpeg invoked from child processes, for the video processing.

    In the past, I used OpenCV to process videos frame by frame without writing the frames to disk or holding all of them in memory at once. Now that I am using Node.js, it is a little harder to set up that kind of pipeline with ffmpeg.

    Here is a small snippet of what I did with OpenCV:

    


import cv2

cap = cv2.VideoCapture(video_file)

# Match the writer to the source stream's geometry and frame rate
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fourcc = cv2.VideoWriter_fourcc(*"mp4v")

out = cv2.VideoWriter("output.mp4", fourcc, fps, (width, height))

def generate_frame():
    # Yield decoded frames one at a time; nothing is buffered or written to disk
    while cap.isOpened():
        code, frame = cap.read()
        if code:
            yield frame
        else:
            print("completed")
            break

for i, frame in enumerate(generate_frame()):
    # Edit the frame in memory, then write it straight to the output video
    edited_frame = frame  # per-frame processing goes here
    out.write(edited_frame)

cap.release()
out.release()

    Additionally, I intend to leverage image-processing libraries like Sharp and the Canvas API to edit individual frames before assembling the final video. I am looking for help in handling video frames efficiently within the constraints of AWS Lambda.
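
    The piped, no-intermediate-files approach the question is after can also be expressed with ffmpeg itself, by decoding to raw frames on stdout and reading them one at a time. Below is a minimal sketch, written in Python to mirror the OpenCV snippet above rather than the Node.js target; the input path and frame size are assumed placeholders.

import subprocess

import numpy as np

video_file = "input.mp4"    # placeholder input path
width, height = 1280, 720   # assumed, fixed frame geometry

# ffmpeg decodes the video and writes raw RGB frames to stdout
proc = subprocess.Popen(
    ["ffmpeg", "-i", video_file, "-f", "rawvideo", "-pix_fmt", "rgb24", "-"],
    stdout=subprocess.PIPE,
)

frame_size = width * height * 3  # bytes per rgb24 frame
while True:
    raw = proc.stdout.read(frame_size)
    if len(raw) < frame_size:
        break  # end of stream
    frame = np.frombuffer(raw, dtype=np.uint8).reshape((height, width, 3))
    # process `frame` here, one frame at a time

proc.wait()
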

    


    Any insights, code snippets, or recommendations would be greatly appreciated. Thank you!
  • Is there a faster way to generate video from pixel arrays using Python and ffmpeg?

    8 May 2019, by devneal17

    I’ve found a few sources which use Python and ffmpeg to generate video from pixel arrays by passing the -f rawvideo flag [1][2]. However, this is very slow for high-definition video, since each individual pixel must be piped into ffmpeg.

    In fact this is provably wasteful, as I’ve found that 2.5 GB of pixel arrays generates about 80 KB of video. I’ve also chanced upon some examples where JavaScript can render high-quality animations in near-real time [1], which makes me even more suspicious that I’m doing something wrong.

    Is there a way to do this more efficiently, perhaps by piping the differences between pixel arrays into ffmpeg rather than the pixels themselves?

    (edit) This is the line I’m using. Most executions take the else path that follows.
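
    For context, the rawvideo-piping pattern the question refers to normally sends whole frames to ffmpeg's stdin, one write per frame rather than one per pixel. A rough sketch of that pattern follows (it is not the asker's actual command, which is not included above; the resolution, frame rate and codec settings are illustrative assumptions).

import subprocess

import numpy as np

width, height, fps = 1920, 1080, 30   # assumed output geometry and frame rate

# ffmpeg reads raw RGB frames from stdin and encodes them with libx264
proc = subprocess.Popen(
    ["ffmpeg", "-y",
     "-f", "rawvideo", "-pix_fmt", "rgb24",
     "-s", f"{width}x{height}", "-r", str(fps), "-i", "-",
     "-c:v", "libx264", "-pix_fmt", "yuv420p", "output.mp4"],
    stdin=subprocess.PIPE,
)

for _ in range(120):                      # 120 dummy frames as an illustration
    frame = np.zeros((height, width, 3), dtype=np.uint8)
    proc.stdin.write(frame.tobytes())     # one write per frame, not per pixel

proc.stdin.close()
proc.wait()
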

  • FFmpeg: Concatenated clips extracted using -ss and -t hang or go out of sync while playing

    20 July 2013, by Yogi

    I have a set of movies in different formats, and I am trying to extract small clips from these source movies and concatenate them into one movie.

    My workflow has been the following:

    1. Convert all the source movies to the same format (width, height, fps, codec). The scale and pad options are there so that all movies end up the same size, even if their aspect ratios differ.

      ffmpeg -i $infile -vcodec libx264 -strict -2 -vf scale=iw*sar*min(${MAX_WIDTH}/(iw*sar)\,${MAX_HEIGHT}/ih):ih*min(${MAX_WIDTH}/(iw*sar)\,${MAX_HEIGHT}/ih),pad=${MAX_WIDTH}:${MAX_HEIGHT}:(ow-iw)/2:(oh-ih)/2 -b:v 500k -b:a 64k -movflags +faststart -g 10 -r 25 ${outbasename}.mp4

    2. Extract clips:

      ffmpeg -ss $starttime -t $duration -i $in_file -vcodec copy -acodec copy $out_file

    3. Finally, combine the clips by first making a concat_list.txt file, which contains the list of clips to be concatenated and their durations (a sketch of such a file is shown after the command below), and then using ffmpeg's concat demuxer:

      ffmpeg -f concat -i concat_list.txt -c copy -movflags +faststart $oname
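
    A concat_list.txt in the concat demuxer's format looks roughly like the following; the clip names and durations here are made-up placeholders, with one file line (and, optionally, one duration line) per extracted clip:

      file 'clip_001.mp4'
      duration 4.0
      file 'clip_002.mp4'
      duration 6.5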

    The problem I am facing is that many of the final videos hang or go out of sync somewhere in the middle of playback. I have tried using mjpeg as the codec, but I still get the same behaviour. I can play the individual extracted clips, and they all seem to play fine in most players. Does anybody know what I am doing wrong? I am using ffmpeg version 1.2.1.