
Media (2)
-
Example of action buttons for a collaborative collection
27 February 2013, by
Updated: March 2013
Language: French
Type: Image
-
Example of action buttons for a personal collection
27 February 2013, by
Updated: February 2013
Language: English
Type: Image
Other articles (110)
-
Farm management
2 March 2010, by - The farm as a whole is managed by "super admins".
Certain settings can be adjusted to suit the needs of the different channels.
To start with, it uses the "Gestion de mutualisation" plugin -
Requesting the creation of a channel
12 March 2010, by - Depending on how the platform is configured, the user may have two different ways of requesting the creation of a channel. The first is at the moment of registration; the second, after registration, by filling in a request form.
Both methods ask for the same information and work in much the same way: the prospective user must fill in a series of form fields, which first of all give the administrators information about (...) -
Creating farms of unique websites
13 April 2011, by - MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
On other sites (16555)
-
draw a rectangle on YUV420p frame with python opencv
18 January 2021, by user3705497 - I have a Python script sandwiched between ffmpeg input and output commands; logically it looks like the following:


ffmpeg -i webcam -vf format=yuv420p -f rawvideo - | python below.py | ffmpeg -f rawvideo -video_size 640x480 -i - -f sometype some_output 



Below is the python script snippet :


import cv2 as cv
import sys
import subprocess as sp
import numpy as np

if sys.platform == "win32":
 import os, msvcrt
 msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)

width = 640
height = 480

FFMPEG_BIN = 'ffmpeg'
command = [ FFMPEG_BIN,
 '-re',
 '-f', 'dshow',
 '-rtbufsize', '200M',
 '-i', 'video=USB2.0 VGA UVC WebCam',
 '-vf', 'format=yuv420p',
 '-f', 'rawvideo', '-']
pipe = sp.Popen(command, stdout = sp.PIPE, bufsize=10**8) 

while True:
 raw_image = pipe.stdout.read(width*height*3)
 frame = np.frombuffer(raw_image, dtype='uint8')

 frame = frame.reshape((height,width,3)) 

 # cv.rectangle(frame, (xmin, ymin), (xmax, ymax), color=(0, 255, 0), thickness=1)

 sys.stdout.buffer.write(frame.tobytes())  # tostring() is deprecated/removed in recent NumPy



I want to draw a rectangle on the YUV frame. I can't seem to get the right color (should be green - (0, 255, 0))


My understanding so far is to break the frame down into its three planes, Y, U and V, then rejoin them. I'm not sure if this is correct, and which plane should I call cv.rectangle on?
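A minimal sketch of that plane-by-plane idea, assuming full-range BT.601 conversion: note that one yuv420p frame is width*height*3/2 bytes (a full-resolution Y plane followed by quarter-resolution U and V planes), not width*height*3, so the read size in the snippet above would also need adjusting. `draw_rect_yuv420p` is a hypothetical helper, and the filled rectangle is drawn with plain NumPy slicing rather than cv.rectangle:

```python
import numpy as np

W, H = 640, 480
FRAME_SIZE = W * H * 3 // 2  # yuv420p: full-res Y plane + quarter-res U and V planes

# BGR green (0, 255, 0) expressed in YUV (full-range BT.601, rounded)
Y_GREEN, U_GREEN, V_GREEN = 150, 44, 21

def draw_rect_yuv420p(raw, x0, y0, x1, y1):
    """Draw a filled green rectangle on one raw yuv420p frame (bytes in, bytes out)."""
    buf = np.frombuffer(raw, dtype=np.uint8).copy()        # copy: frombuffer is read-only
    y = buf[:W * H].reshape(H, W)                          # luma plane, full resolution
    u = buf[W * H:W * H * 5 // 4].reshape(H // 2, W // 2)  # chroma, 2x subsampled
    v = buf[W * H * 5 // 4:].reshape(H // 2, W // 2)
    y[y0:y1, x0:x1] = Y_GREEN
    u[y0 // 2:y1 // 2, x0 // 2:x1 // 2] = U_GREEN          # halve coordinates for chroma
    v[y0 // 2:y1 // 2, x0 // 2:x1 // 2] = V_GREEN
    return buf.tobytes()
```

Green (0, 255, 0) in BGR maps to roughly (150, 44, 21) in YUV, which is why writing (0, 255, 0) directly onto YUV data comes out the wrong color. For an outlined rectangle, cv.rectangle could be called on each plane separately with the per-plane value and halved coordinates for U and V.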


-
Pipe video frames from ffmpeg to canvas without loading the entire video into memory
1 January 2024, by Aviato - I am working on a project that involves frame manipulation, and I decided to use the node-canvas API for it. I used to work with OpenCV in Python, where the cv2.VideoCapture class takes a video as input and lets you read its frames one at a time in a loop, without loading them all into memory at once.
Now I have tried many ways to replicate the same thing using ffmpeg, i.e. loading frames from a video in an ordered but "on-demand" fashion.

I tried running ffmpeg as a child process and piping the frames to standard output.


const spawnProcess = require('child_process').spawn,
 ffmpeg = spawnProcess('ffmpeg', [
 '-i', 'test.mp4',
 '-vcodec', 'png',
 '-f', 'rawvideo',
 '-s', '1920x1080', // size of one frame (ffmpeg expects WxH, not W*H)
 'pipe:1'
 ]);
let i = 0;
ffmpeg.stdout.on('data', (data) => {
 try {
 // console.log(tf.node.decodeImage(data).shape)
 console.log(`${++i} frames read`)
 //context.drawImage(data, 0, 0, width, height)
 } catch(e) {
 console.log(e)
 }
})



The console shows around 4000+ logs, but the video only had 150 frames. After much investigating and console-logging the data, I found it was buffer data: the 'data' event does not fire once per frame, it just returns buffer chunks in an unstructured way.
I want to read frames from a video and process them one at a time in memory; I don't want to hold all the frames at once in memory or on the filesystem.




I also want to pipe the frames in a format that can be rendered onto a canvas using drawImage.


-
How to encode a series of .dpx files using X264
4 June 2015, by user3759710 - I am a complete newbie to video encoding. I am trying to encode a series of .dpx files into a single encoded video output file in any of the following formats (.mp4, .avi, .h264, .mkv, etc.).
I have tried 2 different approaches. The first one works and the second one does not.
I would like to know the difference between the two. Any help/input would be much appreciated.
1) Using ffmpeg with the x264 library works well; I am able to produce the desired output:
ffmpeg -start_number 0 -i frame%4d.dpx -pix_fmt yuv420p -c:v libx264 -crf 28
-profile:v baseline fromdpx.h264
2) I first try to concatenate all the dpx files into a single file using the concat protocol in ffmpeg, and then use x264 to encode the concatenated file.
Here I see that the size of the concatenated file is the sum of all the files concatenated. But when I use the x264 command to encode the concatenated file, I get a green screen (basically not the desired output):
ffmpeg -i "concat:frame0.dpx|frame01.dpx|frame2.dpx etc" -c copy output.dpx
then
x264 --crf 28 --profile baseline -o encoded.mp4 --input-res 1920x1080 --demuxer raw
output.dpx
I also tried to encode the concatenated file using ffmpeg as follows:
ffmpeg -i output.dpx -pix_fmt yuv420p -c:v libx264 -crf 28 -profile:v baseline fromdpx.h264
This also gives me a blank video.
Could someone please point out what is going on here? Why does the first method work and the second does not?
Thank you.
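A plausible explanation for the difference, sketched rather than verified: DPX is a single-image format, so each .dpx file starts with its own header, identified by the magic bytes "SDPX" (big-endian) or "XPDS" (little-endian). Byte-concatenating the files produces one file containing many embedded headers; a DPX decoder reads only the first image, and x264's raw demuxer expects headerless YUV data, not DPX, which would account for the green/blank output. The hypothetical check below makes the repeated headers visible (counting magic bytes can over-count if the pattern happens to occur in pixel data):

```python
import os
import tempfile

DPX_MAGICS = (b"SDPX", b"XPDS")  # big- and little-endian DPX magic numbers

def count_dpx_headers(path):
    """Count apparent DPX headers in a file; a valid .dpx contains exactly one."""
    with open(path, "rb") as f:
        data = f.read()
    return sum(data.count(magic) for magic in DPX_MAGICS)

# Simulate "concat:" on three minimal stand-in files (fake header + dummy payload)
fake_dpx = b"SDPX" + bytes(64)            # not a real image, just the magic + padding
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(fake_dpx * 3)               # byte-concatenation, like the concat protocol
    concatenated = tmp.name

print(count_dpx_headers(concatenated))    # 3 headers; decoders stop after the first image
os.remove(concatenated)
```

The image-sequence approach in method 1 works because ffmpeg's image demuxer opens each .dpx file individually and decodes it as one frame.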