
Media (1)
-
SWFUpload Process
6 September 2011
Updated: September 2011
Language: French
Type: Text
Other articles (69)
-
Submit bugs and patches
13 April 2011
Unfortunately, software is never perfect.
If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information:
- the browser you are using, including the exact version
- as precise an explanation of the problem as possible
- if possible, the steps taken that resulted in the problem
- a link to the site/page in question
If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
You may also (...) -
MediaSPIP v0.2
21 June 2013
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, and it is announced here.
The zip file provided here contains only the MediaSPIP sources in standalone form.
As with the previous version, all software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further modifications (...) -
Making files available
14 April 2011
By default, on initialization, MediaSPIP does not allow visitors to download files, whether they are originals or the result of their transformation or encoding. It only allows them to be viewed.
However, it is possible and easy to give visitors access to these documents in various forms.
All of this happens in the template configuration page. You need to go to the channel's administration area and choose in the navigation (...)
On other sites (9268)
-
How to pipe Picamera video to FFMPEG with subprocess (Python)
31 July 2017, by VeniVidiReliqui
I see a ton of info about piping a raspivid stream directly to FFMPEG for encoding, muxing, and restreaming, but these use cases are mostly from bash, similar to:
raspivid -n -w 480 -h 320 -b 300000 -fps 15 -t 0 -o - | ffmpeg -i - -f mpegts udp://192.168.1.2:8090
I'm hoping to use the Picamera library so I can do concurrent processing with OpenCV and similar while still streaming with FFMPEG, but I can't figure out how to properly open FFMPEG as a subprocess and pipe video data to it. I have seen plenty of attempts, unanswered posts, and people claiming to have done it, but none of it seems to work on my Pi.
Should I create a video buffer with Picamera and pipe that raw video to FFMPEG? Can I use camera.capture_continuous() and pass FFMPEG the bgr24 images I'm using for my OpenCV calculations?
I've tried all sorts of variations, and I'm not sure if I'm just misunderstanding how to use the subprocess module, FFMPEG, or if I'm simply missing a few settings. I understand the raw stream won't have any metadata, but I'm not completely sure what settings I need to give FFMPEG for it to understand what I'm giving it.
I have a Wowza server I'll eventually be streaming to, but I'm currently testing by streaming to a VLC server on my laptop. I've currently tried this:
import subprocess as sp
import picamera
import picamera.array
import numpy as np

npimage = np.empty(
    (480, 640, 3),
    dtype=np.uint8)

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    camera.framerate = 24
    camera.start_recording('/dev/null', format='h264')
    command = [
        'ffmpeg',
        '-y',
        '-f', 'rawvideo',
        '-video_size', '640x480',
        '-pix_fmt', 'bgr24',
        '-framerate', '24',
        '-an',
        '-i', '-',
        '-f', 'mpegts', 'udp://192.168.1.54:1234']
    pipe = sp.Popen(command, stdin=sp.PIPE,
                    stdout=sp.PIPE, stderr=sp.PIPE, bufsize=10**8)
    if pipe.returncode != 0:
        output, error = pipe.communicate()
        print('Pipe failed: %d %s %s' % (pipe.returncode, output, error))
        raise sp.CalledProcessError(pipe.returncode, command)
    while True:
        camera.wait_recording(0)
        for i, image in enumerate(
                camera.capture_continuous(
                    npimage,
                    format='bgr24',
                    use_video_port=True)):
            pipe.stdout.write(npimage.tostring())
    camera.stop_recording()

I've also tried writing the stream to a file-like object that simply creates the FFMPEG subprocess and writes to its stdin (camera.start_recording() can be given an object like this when you initialize the picam):
class PipeClass():
    """Start pipes and load ffmpeg."""

    def __init__(self):
        """Create FFMPEG subprocess."""
        self.size = 0
        command = [
            'ffmpeg',
            '-f', 'rawvideo',
            '-s', '640x480',
            '-r', '24',
            '-i', '-',
            '-an',
            '-f', 'mpegts', 'udp://192.168.1.54:1234']
        self.pipe = sp.Popen(command, stdin=sp.PIPE,
                             stdout=sp.PIPE, stderr=sp.PIPE)
        if self.pipe.returncode != 0:
            raise sp.CalledProcessError(self.pipe.returncode, command)

    def write(self, s):
        """Write to the pipe."""
        self.pipe.stdin.write(s)

    def flush(self):
        """Flush pipe."""
        print("Flushed")
Usage:

(...)
with picamera.PiCamera() as camera:
    p = PipeClass()
    camera.start_recording(p, format='h264')
(...)

Any assistance with this would be amazing!
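For reference, here is a minimal sketch of how this kind of pipeline is usually wired up (hedged: untested on this setup; it assumes ffmpeg is on the PATH, a listener at udp://192.168.1.54:1234, and picamera >= 1.11, which can capture straight into a numpy array). The two changes from the code above are that frames are written to the subprocess's stdin rather than its stdout, and that picamera's numpy capture format is 'bgr' (ffmpeg's name for the same layout is bgr24):

import subprocess as sp
import numpy as np
import picamera

# ffmpeg reads raw bgr24 frames from stdin and restreams them as MPEG-TS.
# The rawvideo demuxer has no header, so size, pixel format, and framerate
# must be declared up front and must match the frames we write.
command = [
    'ffmpeg',
    '-y',
    '-f', 'rawvideo',
    '-video_size', '640x480',
    '-pix_fmt', 'bgr24',
    '-framerate', '24',
    '-i', '-',
    '-an',
    '-f', 'mpegts', 'udp://192.168.1.54:1234']

# Only stdin is redirected; leaving stdout/stderr alone means ffmpeg's
# log output cannot fill an unread pipe buffer and stall the process.
pipe = sp.Popen(command, stdin=sp.PIPE)

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    camera.framerate = 24
    frame = np.empty((480, 640, 3), dtype=np.uint8)
    for _ in camera.capture_continuous(frame, format='bgr',
                                       use_video_port=True):
        pipe.stdin.write(frame.tobytes())  # stdin, not stdout

This also leaves room for the concurrent OpenCV work: each frame is a numpy array before it is written, so it can be processed in the same loop.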
-
How to pipe HLS segment to numpy array with FFmpeg
22 November 2022, by urico12
I'm trying to extract a clip from an HLS stream and pipe the output to a numpy array. When I set the FFmpeg process to output to an mp4 file, the video looks exactly as I expect. However, when I output to a pipe so that I can add the data to a numpy array, the resulting video's display is off.


I'm running this FFmpeg command as a subprocess in Python. I retrieve the output and send it to cv2.VideoWriter to create the mp4. I'm only doing this second part at the moment to test that the video data is correct.


clip_command = f"ffmpeg -y -live_start_index 0 -i master.m3u8 -ss {start} -t {duration} -pix_fmt rgb24 -f rawvideo pipe:1"
generate_clip = Popen(clip_command, shell=True, stdout=PIPE, stderr=STDOUT, bufsize=10**8)

nb_img = 300*300*3
out = cv2.VideoWriter('output.mp4', cv2.VideoWriter_fourcc(*'mp4v'), 15, (300, 300))
while True:
    buffer = generate_clip.stdout.read(nb_img)
    if len(buffer) != nb_img:
        # last frame
        break
    img = np.frombuffer(buffer, dtype='uint8').reshape(300, 300, 3)
    out.write(img)
out.release()



When I view the video that is generated by cv2.VideoWriter, it looks something like this.
The colors are changed from the source video and the frame repeats near the edges. I've tried setting different values for -pix_fmt but nothing seems to help. I'm not sure what I'm doing wrong, or if what I'm trying to do is even feasible with FFmpeg, because I haven't been able to find much information on this particular use case.
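A hedged sketch of how to chase the two symptoms above (assumptions, not confirmed from the post: the HLS variant being decoded is not actually 300x300, which would make the reshape wrap and repeat near the edges, and cv2 expects BGR order while the command outputs rgb24, which would shift the colors; redirecting stderr into stdout also lets ffmpeg's log text leak into the raw pixel bytes):

import subprocess as sp
import cv2
import numpy as np

# Hypothetical clip bounds; the originals come from elsewhere in the code.
start, duration = 0, 10
w, h = 300, 300

# Scale to the exact size we reshape to, emit BGR to match cv2's channel
# order, and keep stderr out of stdout so log lines cannot corrupt the
# pixel stream.
cmd = (f"ffmpeg -y -live_start_index 0 -i master.m3u8 "
       f"-ss {start} -t {duration} -vf scale={w}:{h} "
       f"-pix_fmt bgr24 -f rawvideo pipe:1")
clip = sp.Popen(cmd, shell=True, stdout=sp.PIPE, stderr=sp.DEVNULL,
                bufsize=10**8)

frame_size = w * h * 3
out = cv2.VideoWriter('output.mp4', cv2.VideoWriter_fourcc(*'mp4v'),
                      15, (w, h))
while True:
    buf = clip.stdout.read(frame_size)
    if len(buf) != frame_size:
        break  # end of stream
    out.write(np.frombuffer(buf, dtype=np.uint8).reshape(h, w, 3))
out.release()

If the source really is 300x300, the scale filter is a no-op, so pinning the size explicitly costs nothing.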


-
How to pipe to ffmpeg RGB value 10?
4 July 2021, by Milo Higgins
I am trying to create a video file using ffmpeg. I have all the RGB pixel data for each frame, and following this blog post I have code which sends the data frame by frame via a pipe. It mostly works. However, if any pixel has a value of 10 in any of the 3 channels (e.g. #00000A, #0AFFFF, etc.), then it produces these errors:


[rawvideo @ 0000020c3787f040] Packet corrupt (stream = 0, dts = 170) 
pipe:: corrupt input packet in stream 0
[rawvideo @ 0000020c3789f100] Invalid buffer size, packet size 32768 < expected frame_size 49152
Error while decoding stream #0:0: Invalid argument



And the output video is garbled.
Now I suspect that, because 10 is the ASCII newline character, this is somehow confusing the pipe.
What exactly is happening here, and how do I fix it so that I can use RGB values like #00000a?


Below is C code which demonstrates the problem:


#include <stdio.h>

unsigned char frame[128][128][3];

int main() {
    int x, y, i;
    FILE *pipeout = popen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt rgb24 -s 128x128 -r 24 -i - -f mp4 -q:v 1 -an -vcodec mpeg4 output.mp4", "w");

    for (i = 0; i < 128; i++) {
        for (x = 0; x < 128; ++x) {
            for (y = 0; y < 128; ++y) {
                frame[y][x][0] = 0;
                frame[y][x][1] = 0;
                frame[y][x][2] = 10;
            }
        }
        fwrite(frame, 1, 128*128*3, pipeout);
    }

    fflush(pipeout);
    pclose(pipeout);
    return 0;
}



EDIT: for clarity, I am using Windows.
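A plausible explanation, given that Windows note (hedged: the post above doesn't confirm the fix): popen with mode "w" opens the pipe in text mode on Windows, and text mode expands every byte 10 (LF) into the two bytes 13 10 (CR LF) on write. Each 128x128x3 frame therefore arrives larger than the 49152 bytes ffmpeg expects, which matches the "Packet corrupt" and "Invalid buffer size" errors. A sketch of the same program with the pipe opened in binary mode (_popen/_pclose are the Windows spellings):

#include <stdio.h>

unsigned char frame[128][128][3];

int main(void) {
    /* "wb" opens the pipe in binary mode, so byte 10 (LF) passes through
       unchanged instead of being expanded to CR LF. */
    FILE *pipeout = _popen(
        "ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt rgb24"
        " -s 128x128 -r 24 -i - -f mp4 -q:v 1 -an -vcodec mpeg4"
        " output.mp4", "wb");

    for (int i = 0; i < 128; i++) {
        for (int x = 0; x < 128; ++x)
            for (int y = 0; y < 128; ++y) {
                frame[y][x][0] = 0;
                frame[y][x][1] = 0;
                frame[y][x][2] = 10;   /* the problematic value */
            }
        fwrite(frame, 1, sizeof frame, pipeout);
    }

    _pclose(pipeout);
    return 0;
}

On POSIX systems there is no text/binary distinction for pipes, so the original "w" mode works there; the translation only happens on Windows.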