Advanced search

Media (0)


No media matching your criteria is available on the site.

Other articles (32)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • What is an editorial

    21 June 2013, by

    Write your point of view in an article. It will be filed in a section set aside for this purpose.
    An editorial is a text-only article. Its purpose is to collect points of view in a dedicated section. A single editorial is featured on the home page. To read earlier ones, browse the dedicated section.
    You can customize the editorial creation form.
    Editorial creation form In the case of an editorial-type document, the (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.

On other sites (7640)

  • OpenCV to ffplay from named pipe (fifo)

    13 July 2017, by Betsalel Williamson

    I've been working on piping video from OpenCV in C++. I've tried to pipe the processed image from OpenCV to a named pipe, with the end goal of republishing the stream through a web server using either VLC or a NodeJS server.

    Where I'm stuck is that the output from OpenCV doesn't seem to be processed correctly: the video always has artifacts even though it should be the raw video (a sketch of a more defensive write loop follows the code below).


    // Headers needed to build this snippet: OpenCV plus POSIX FIFO I/O.
    #include <opencv2/opencv.hpp>   // VideoCapture, Mat, waitKey, CV_CAP_PROP_*

    #include <cerrno>
    #include <cstdio>
    #include <cstdlib>
    #include <cstring>
    #include <fcntl.h>
    #include <iostream>
    #include <sys/stat.h>
    #include <unistd.h>

    using namespace cv;
    using namespace std;

    int main(int argc, char** argv)
    {

       VideoCapture camera(argv[1]);

       float fps = 15;

       // VLC raw video
       printf("Run command:\n\ncat /tmp/myfifo | cvlc --demux=rawvideo --rawvid-fps=%4.2f --rawvid-width=%.0f --rawvid-height=%.0f  --rawvid-chroma=RV24 - --sout \"#transcode{vcodec=h264,vb=200,fps=30,width=320,height=240}:std{access=http{mime=video/x-flv},mux=ffmpeg{mux=flv},dst=:8081/stream.flv}\""
           ,fps
           ,camera.get(CV_CAP_PROP_FRAME_WIDTH)
           ,camera.get(CV_CAP_PROP_FRAME_HEIGHT)
           );


       // ffplay raw video
       printf("Run command:\n\ncat /tmp/myfifo | ffplay -f rawvideo -pixel_format bgr24 -video_size %.0fx%.0f -framerate %4.2f -i pipe:"
           ,camera.get(CV_CAP_PROP_FRAME_WIDTH)
           ,camera.get(CV_CAP_PROP_FRAME_HEIGHT)
           ,fps
           );

       int fd;
       int status;

       char const * myFIFO = "/tmp/myfifo";

       if ((status = mkfifo(myFIFO, 0666)) < 0) {
           // printf("Fifo mkfifo error: %s\n", strerror(errno));
           // exit(EXIT_FAILURE);
       } else {
           cout << "Made a named pipe at: " << myFIFO << endl;
       }

       cout << "\n\nHit any key to continue after running one of the previously listed commands..." << endl;
       cin.get();

       if ((fd = open(myFIFO,O_WRONLY|O_NONBLOCK)) < 0) {
           printf("Fifo open error: %s\n", strerror(errno));
           exit(EXIT_FAILURE);
       }  

       while (true)
       {
           if (waitKey(1) > 0)
           {
               break;    
           }

           Mat colorImage;
           camera >> colorImage;

           // method: named pipe as matrix writes data to the named pipe, but image has glitch
           size_t bytes = colorImage.total() * colorImage.elemSize();

           if (write(fd, colorImage.data, bytes) < 0) {
               printf("Error in write: %s \n", strerror(errno));
           }            
       }

       close(fd);

       exit(EXIT_SUCCESS);
    }
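
    One detail worth flagging in the snippet above: the FIFO is opened with O_NONBLOCK and the return value of write() is only checked for errors, yet a write to a pipe may legitimately transfer fewer bytes than requested (or fail with EAGAIN), and any short write shifts every following frame boundary in a raw stream, which looks exactly like persistent artifacts. As a hedged illustration rather than a fix confirmed by the poster, here is a minimal Python sketch of the same raw-BGR-over-FIFO idea with a blocking open and a loop that retries until each frame is written in full; it assumes cv2 (opencv-python) is installed, a camera index or file path in sys.argv[1], and that /tmp/myfifo was created beforehand with mkfifo.

    import errno
    import os
    import sys

    import cv2  # assumes opencv-python is installed

    FIFO = "/tmp/myfifo"  # create beforehand with: mkfifo /tmp/myfifo

    def write_all(fd, data):
        # A single os.write on a pipe may accept only part of the buffer;
        # keep writing until the whole frame has been pushed through.
        view = memoryview(data)
        while len(view) > 0:
            try:
                written = os.write(fd, view)
            except OSError as exc:
                if exc.errno == errno.EAGAIN:  # pipe full, retry
                    continue
                raise
            view = view[written:]

    def main():
        cap = cv2.VideoCapture(sys.argv[1])
        width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
        height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
        print("Run: cat %s | ffplay -f rawvideo -pixel_format bgr24 "
              "-video_size %dx%d -framerate 15 -i pipe:" % (FIFO, width, height))

        fd = os.open(FIFO, os.O_WRONLY)  # blocking open: waits for the reader
        try:
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                write_all(fd, frame.tobytes())  # contiguous HxWx3 uint8 BGR bytes
        finally:
            os.close(fd)

    if __name__ == "__main__":
        main()

    The same retry-until-complete pattern can be applied to the C++ write() call directly.
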
  • Ffmpeg only receives a piece of information from the pipe

    4 juillet 2017, par Maxim Fedorov

    First of all, my English is not very good; I'm sorry for that.

    I use ffmpeg from C# to convert images to a video. To interact with ffmpeg, I use pipes.

    public async Task<byte[]> ExecuteCommand(
           string arguments,
           Action<NamedPipeServerStream> sendDataUsingPipe)
       {
           var inStream = new NamedPipeServerStream(
               "from_ffmpeg",
               PipeDirection.In,
               1,
               PipeTransmissionMode.Byte,
               PipeOptions.Asynchronous,
               PipeBufferSize,
               PipeBufferSize);

           var outStream = new NamedPipeServerStream(
               "to_ffmpeg",
               PipeDirection.Out,
               1,
               PipeTransmissionMode.Byte,
               PipeOptions.Asynchronous,
               PipeBufferSize,
               PipeBufferSize);

           var waitInConnectionTask = inStream.WaitForConnectionAsync();
           var waitOutConnectionTask = outStream.WaitForConnectionAsync();

           byte[] byteData;

           using (inStream)
           using (outStream)
           using (var inStreamReader = new StreamReader(inStream))
           using (var process = new Process())
           {
               process.StartInfo = new ProcessStartInfo
               {
                   RedirectStandardOutput = true,
                   RedirectStandardError = true,
                   RedirectStandardInput = true,
                   FileName = PathToFfmpeg,
                   Arguments = arguments,
                   UseShellExecute = false,
                   CreateNoWindow = true
               };

               process.Start();

               await waitOutConnectionTask;

               sendDataUsingPipe.Invoke(outStream);

               outStream.Disconnect();
               outStream.Close();

               await waitInConnectionTask;

               var logTask = Task.Run(() => process.StandardError.ReadToEnd());
               var dataBuf = ReadAll(inStream);

               var shouldBeEmpty = inStreamReader.ReadToEnd();
               if (!string.IsNullOrEmpty(shouldBeEmpty))
                   throw new Exception();

               var processExitTask = Task.Run(() => process.WaitForExit());
               await Task.WhenAny(logTask, processExitTask);
               var log = logTask.Result;

               byteData = dataBuf;

               process.Close();
               inStream.Disconnect();
               inStream.Close();
           }

           return byteData;
       }

    The sendDataUsingPipe action looks like this:

    Action<NamedPipeServerStream> sendDataUsingPipe = stream =>
           {
               foreach (var imageBytes in data)
               {
                   using (var image = Image.FromStream(new MemoryStream(imageBytes)))
                   {
                       image.Save(stream, ImageFormat.Jpeg);
                   }
               }
           };

    When I send 10/20/30 images (regardless of size), ffmpeg processes everything.
    When I need to transfer 600/700/... images, the ffmpeg log shows that it received only 189-192 of them, and the resulting video also contains only 189-192 images.
    There are no errors in the logs and no exceptions in the code.

    What could be the reason for this behavior?
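
    One commonly cited pitfall with this pattern, regardless of the language: in the code above, all of the input is written to ffmpeg before anything starts draining its output pipe and stderr, so once those buffers fill up ffmpeg blocks on its own writes and stops reading input, and only the first chunk of images ever gets through. Whether that is what happens here cannot be confirmed from the post alone, but as a sketch of the safer ordering, here is a minimal Python example that keeps stderr drained in a background thread while frames are still being fed on stdin. The ffmpeg arguments, the out.mp4 name and the encode_jpegs helper are placeholders for illustration, not the poster's code, and the sketch writes to a file rather than a second pipe to keep it short.

    import subprocess
    import threading

    def encode_jpegs(jpeg_frames, out_path="out.mp4", fps=25):
        # Feed in-memory JPEG images to ffmpeg on stdin while a background
        # thread keeps stderr drained, so neither side can block on a full pipe.
        proc = subprocess.Popen(
            ["ffmpeg", "-y",
             "-f", "image2pipe", "-framerate", str(fps), "-i", "pipe:0",
             out_path],
            stdin=subprocess.PIPE,
            stderr=subprocess.PIPE)

        log_chunks = []

        def drain_stderr():
            for line in proc.stderr:   # read continuously until ffmpeg exits
                log_chunks.append(line)

        t = threading.Thread(target=drain_stderr)
        t.start()

        for jpeg in jpeg_frames:
            proc.stdin.write(jpeg)     # raw JPEG bytes, back to back
        proc.stdin.close()             # EOF: tells ffmpeg the input is finished

        proc.wait()
        t.join()
        return b"".join(log_chunks)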

  • Python: mp3 to alsaaudio through ffmpeg pipe and wave.open(f,'r')

    3 July 2017, by user2754098

    I'm trying to decode mp3 to wav using ffmpeg:

    import alsaaudio
    import wave
    from subprocess import Popen, PIPE

    with open('filename.mp3', 'rb') as infile:
       p=Popen(['ffmpeg', '-i', '-', '-f', 'wav', '-'], stdin=infile, stdout=PIPE)
       ...

    Next I want to redirect the data from p.stdout.read() to wave.open(file, 'r') so I can use readframes(n) and the other methods. But I cannot, because 'file' in wave.open(file, 'r') can only be a file name or an open file pointer.

       ...
       file = wave.open(p.stdout.read(),'r')
       card='default'
       device=alsaaudio.PCM(card=card)
       device.setchannels(file.getnchannels())
       device.setrate(file.getframerate())
       device.setformat(alsaaudio.PCM_FORMAT_S16_LE)
       device.setperiodsize(320)
       data = file.readframes(320)
       while data:
           device.write(data)
           data = file.readframes(320)

    I got:

    TypeError: file() argument 1 must be encoded string without NULL bytes, not str

    So is it possible to handle the data from p.stdout.read() with wave.open()?
    Making a temporary .wav file isn't a solution.

    Sorry for my English.
    Thanks.

    UPDATE

    Thanks to PM 2Ring for the hint about io.BytesIO.

    However, the resulting code does not work.

    import io
    import alsaaudio
    import wave
    from subprocess import Popen, PIPE

    with open('sometrack.mp3', 'rb') as infile:
           p=Popen(['ffmpeg', '-i', '-', '-f','wav', '-'], stdin=infile , stdout=PIPE , stderr=PIPE)
           fobj = io.BytesIO(p.stdout.read())
    fwave = wave.open(fobj, 'rb')

    Traceback:

    File "./script.py", line x, in <module>
     fwave = wave.open(fobj, 'rb')
    File "/usr/lib/python2.7/wave.py", line x, in open
     return Wave_read(f)
    File "/usr/lib/python2.7/wave.py", line x, in __init__
     self.initfp(f)
    File "/usr/lib/python2.7/wave.py", line x, in initfp
     raise Error, 'not a WAVE file'
    wave.Error: not a WAVE file

    From /usr/lib/python2.7/wave.py:

    ...
    self._file = Chunk(file, bigendian = 0)
    if self._file.getname() != 'RIFF':
       raise Error, 'file does not start with RIFF id'
    if self._file.read(4) != 'WAVE':
       raise Error, 'not a WAVE file'
    ...

    The check fails because of a 'bad' self._file object.

    Inside /usr/lib/python2.7/chunk.py I found the source of the problem:

    ...
    try:
       self.chunksize = struct.unpack(strflag+'L', file.read(4))[0]
    except struct.error:
       raise EOFError
    ...

    Because struct.unpack(strflag+'L', file.read(4))[0] returns 0. The function itself works correctly; the size field in the stream really is 0.

    As specified here:

    "5-8 bytes - File size(integer)
    Size of the overall file - 8 bytes, in bytes (32-bit integer). Typically, you’d fill this in after creation."
    That's why my script doesn't work: wave.open and the other functions cannot handle my file object because self.chunksize = 0. It looks like ffmpeg cannot fill in the file size when its output goes to a pipe.

    SOLUTION

    It's simple.
    I've changed the __init__ method of the Chunk class:

    Before:

    ...
    try:
       self.chunksize = struct.unpack(strflag+'L', file.read(4))[0]
    except struct.error:
       raise EOFError
    ...

    After:

    ...
    try:
       self.chunksize = struct.unpack(strflag+'L', file.read(4))[0]
       currtell = file.tell()
       if self.chunksize == 0:
           file.seek(0)
           file.read(currtell)
           self.chunksize = len(file.read())-4
           file.seek(0)
           file.read(currtell)
    except struct.error:
       raise EOFError
    ...

    Of course, editing the original module is a bad idea, so I've created custom forks of the two classes Chunk and Wave_read.

    The working but unstable full code can be found here.

    Sorry for my awful English.

    Thanks.
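
    As an aside on the approach above: the RIFF size field only has to be patched around because a WAV container was requested in the first place. When the sample format, rate and channel count are fixed on the ALSA side anyway, ffmpeg can be asked for headerless s16le PCM, which has no size field that a pipe could leave at zero. A minimal sketch of that alternative, assuming pyalsaaudio is available and forcing 44.1 kHz stereo on the ffmpeg side (the file name is the one from the post):

    import alsaaudio
    from subprocess import Popen, PIPE

    RATE, CHANNELS, PERIOD = 44100, 2, 320

    # Ask ffmpeg for raw signed 16-bit little-endian PCM: no RIFF header,
    # so nothing depends on a size field that a pipe cannot provide.
    p = Popen(['ffmpeg', '-i', 'sometrack.mp3',
               '-f', 's16le', '-acodec', 'pcm_s16le',
               '-ar', str(RATE), '-ac', str(CHANNELS), '-'],
              stdout=PIPE)

    device = alsaaudio.PCM()
    device.setchannels(CHANNELS)
    device.setrate(RATE)
    device.setformat(alsaaudio.PCM_FORMAT_S16_LE)
    device.setperiodsize(PERIOD)

    # One period of stereo S16 audio is PERIOD * CHANNELS * 2 bytes.
    chunk = p.stdout.read(PERIOD * CHANNELS * 2)
    while chunk:
        device.write(chunk)
        chunk = p.stdout.read(PERIOD * CHANNELS * 2)

    Because the stream is raw PCM, wave.open is not needed at all in this variant.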