
Media (91)

Other articles (102)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects / individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

On other sites (9220)

  • pipe compressed stream to another process using ffmpeg subprocess

    29 August 2021, by Naseer Ahmed

    I want to get compressed data from the camera and send it to the client side before converting it to frames. I have created a dummy version of the code, but I am getting an error that it cannot reshape an array of 0 bytes.

import cv2
import subprocess as sp
import numpy

IMG_W = 640
IMG_H = 480

FFMPEG_BIN = "/usr/bin/ffmpeg"

# First process: decode the H.264 file and send the stream to stdout
ffmpeg_cmd = [FFMPEG_BIN,
              '-i', 'h264.h264',
              '-vcodec', 'h264',
              '-f', 'image2pipe', '-']

# Second process: read the first process's output and emit raw BGR frames
ffmpeg_cmd2 = [FFMPEG_BIN,
               '-i', '-',                  # read input from stdin
               '-r', '30',                 # FPS
               '-pix_fmt', 'bgr24',        # OpenCV expects the bgr24 pixel format
               '-vcodec', 'rawvideo',
               '-an', '-sn',               # disable audio and subtitle processing
               '-f', 'image2pipe', '-']

pipe = sp.Popen(ffmpeg_cmd, stdout=sp.PIPE, bufsize=10)
pipe2 = sp.Popen(ffmpeg_cmd2, stdin=pipe.stdout, stdout=sp.PIPE, bufsize=10)

while True:
    raw_image = pipe2.stdout.read(IMG_W * IMG_H * 3)
    image = numpy.frombuffer(raw_image, dtype='uint8')  # fromstring is deprecated
    image = image.reshape((IMG_H, IMG_W, 3))

    cv2.imshow('Video', image)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

pipe.stdout.flush()
cv2.destroyAllWindows()
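
    The "cannot reshape array of 0 byte" error means the read on the pipe returned an empty buffer (for example because the second ffmpeg process exited immediately), and reshape was then called on zero bytes. A minimal sketch of a defensive frame read, using io.BytesIO to stand in for the ffmpeg pipe and numpy.frombuffer in place of the deprecated fromstring:

```python
import io
import numpy as np

IMG_W, IMG_H = 640, 480
FRAME_BYTES = IMG_W * IMG_H * 3

def read_frame(stream, frame_bytes=FRAME_BYTES):
    """Read one raw BGR frame; return None on EOF or a short read."""
    buf = stream.read(frame_bytes)
    if len(buf) < frame_bytes:
        return None            # never reshape a truncated or empty buffer
    return np.frombuffer(buf, dtype=np.uint8).reshape((IMG_H, IMG_W, 3))

# Demo with an in-memory stream instead of a live pipe
frame = read_frame(io.BytesIO(bytes(FRAME_BYTES)))
```

    In the display loop, a None result would be the signal to break out cleanly instead of crashing on reshape.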

  • Pipe OpenCV and PyAudio to ffmpeg streaming youtube rtmp from python

    22 August 2021, by a0910115172

    How can I pipe OpenCV and PyAudio to ffmpeg for streaming to YouTube RTMP from Python?
The error message is as follows:
No such filter: 'pipe:1'
pipe:1: Invalid argument

    Here is my code:

    Import module

    import cv2
import subprocess
import pyaudio

    Audio

    p = pyaudio.PyAudio()
info = p.get_host_api_info_by_index(0)
numdevices = info.get('deviceCount')
for i in range(0, numdevices):
    if p.get_device_info_by_host_api_device_index(0, i).get('maxInputChannels') > 0:
        print("Input Device id ", i, " - ", p.get_device_info_by_host_api_device_index(0, i).get('name'))

CHUNK = 1024
FORMAT = pyaudio.paInt16
CHANNELS = 2
RATE = 44100

stream = p.open(format=FORMAT,
                channels=CHANNELS,
                rate=RATE,
                input=True,
                frames_per_buffer=CHUNK)

# Stream Audio data here
# data = stream.read(CHUNK)

    Video

    rtmp = r'rtmp://a.rtmp.youtube.com/live2/key'

cap = cv2.VideoCapture(0)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = 30

    Command parameters

    command = ['ffmpeg',
           '-y',
           '-f', 'rawvideo',
           '-pixel_format', 'bgr24',
           '-video_size', "{}x{}".format(width, height),
           '-framerate', str(fps),
           '-i', 'pipe:0',
           '-re',
           '-f', 'lavfi',
           '-i', 'pipe:1',
           '-c:v', 'libx264',
           '-c:a', 'aac',
           '-vf', 'format=yuv420p',
           '-f', 'flv',
           rtmp]

    Create a subprocess for the ffmpeg command

    pipe = subprocess.Popen(command, shell=False, stdin=subprocess.PIPE)
while cap.isOpened():
    success, frame = cap.read()
    if success:
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
        pipe.stdin.write(frame.tobytes())   # tostring() is deprecated
        pipe.stdin.write(stream.read(CHUNK))

    Audio Stop

    stream.stop_stream()
stream.close()
p.terminate()

    Video Stop

    cap.release()
pipe.terminate()

    Thanks
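
    The "No such filter: 'pipe:1'" error comes from '-f lavfi', which tells ffmpeg to parse the next '-i' argument as a filter-graph description rather than a file or pipe. Raw PCM from PyAudio would instead need a PCM demuxer such as '-f s16le', and since a process has only one stdin, the audio has to arrive over a second channel (a named pipe, for instance). A sketch of how the command might be assembled — 'audio_fifo' is a hypothetical named-pipe path, and the sizes mirror the question's setup:

```python
width, height, fps = 640, 480, 30      # assumed capture geometry
RATE, CHANNELS = 44100, 2              # matches the PyAudio settings above
rtmp = 'rtmp://a.rtmp.youtube.com/live2/key'

command = ['ffmpeg',
           '-y',
           '-f', 'rawvideo',           # video input: raw BGR frames on stdin
           '-pixel_format', 'bgr24',
           '-video_size', '{}x{}'.format(width, height),
           '-framerate', str(fps),
           '-i', 'pipe:0',
           '-f', 's16le',              # audio input: raw PCM demuxer, not lavfi
           '-ar', str(RATE),
           '-ac', str(CHANNELS),
           '-i', 'audio_fifo',         # hypothetical named pipe carrying the audio
           '-c:v', 'libx264',
           '-c:a', 'aac',
           '-pix_fmt', 'yuv420p',
           '-f', 'flv',
           rtmp]
```

    Only the command assembly is shown; feeding two live streams without blocking would also need separate writer threads, which is beyond this sketch.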

  • How to pipe to ffmpeg RGB value 10?

    4 juillet 2021, par Milo Higgins

    I am trying to create a video file using ffmpeg. I have all the RGB pixel data for each frame, and following this blog post I have code which sends the data frame by frame via a pipe. It mostly works. However, if any pixel has a value of 10 in any of the 3 channels (e.g. #00000A, #0AFFFF, etc.), it produces these errors:

    [rawvideo @ 0000020c3787f040] Packet corrupt (stream = 0, dts = 170) 
pipe:: corrupt input packet in stream 0
[rawvideo @ 0000020c3789f100] Invalid buffer size, packet size 32768 < expected frame_size 49152
Error while decoding stream #0:0: Invalid argument

    And the output video is garbled.
I suspect that because 10 is the ASCII newline character, it is confusing the pipe somehow.
What exactly is happening here, and how do I fix it so that I can use RGB values like #00000a?

    Below is a C example demonstrating this:

    #include <stdio.h>

    unsigned char frame[128][128][3];

    int main() {
        int x, y, i;

        /* Pipe raw RGB frames straight into ffmpeg */
        FILE *pipeout = popen("ffmpeg -y -f rawvideo -vcodec rawvideo -pix_fmt rgb24 -s 128x128 -r 24 -i - -f mp4 -q:v 1 -an -vcodec mpeg4 output.mp4", "w");

        for (i = 0; i < 128; i++) {              /* 128 frames */
            for (x = 0; x < 128; ++x) {
                for (y = 0; y < 128; ++y) {
                    frame[y][x][0] = 0;          /* R */
                    frame[y][x][1] = 0;          /* G */
                    frame[y][x][2] = 10;         /* B = 0x0A, the problem value */
                }
            }
            fwrite(frame, 1, 128 * 128 * 3, pipeout);
        }

        fflush(pipeout);
        pclose(pipeout);
        return 0;
    }

    EDIT: for clarity, I am using Windows.
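
    Given the Windows note, a plausible explanation (an assumption, since it depends on the platform): popen(..., "w") opens the pipe in text mode, where every 0x0A byte written is translated to 0x0D 0x0A, so any frame containing the byte 10 grows and the stream loses frame alignment. The effect on the question's 128x128 RGB frame can be simulated with a few lines of byte arithmetic:

```python
# Simulate Windows text-mode newline translation on one raw RGB frame.
W = H = 128
frame = bytes([0, 0, 10]) * (W * H)          # every pixel #00000A; blue byte is 0x0A
translated = frame.replace(b"\n", b"\r\n")   # what a text-mode pipe does on Windows

print(len(frame))        # 49152 bytes, the frame size ffmpeg expects
print(len(translated))   # 65536 bytes: one extra byte per pixel
```

    If text-mode translation is indeed the cause, opening the pipe in binary mode (popen with "wb" on Windows) would avoid the corruption.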