
Other articles (96)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • APPENDIX: Plugins used specifically for the farm

    5 March 2010

    The central/master site of the farm needs several extra plugins, beyond those of the channels, in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a pooled instance at user sign-up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

  • Installation in standalone mode

    4 February 2011

    Installing the MediaSPIP distribution involves several steps: retrieving the necessary files, at which point two methods are possible: installing the ZIP archive containing the whole distribution, or fetching the sources of each module separately via SVN; then preconfiguration; then the final installation;
    [mediaspip_zip]Installing the MediaSPIP ZIP archive
    This installation mode is the simplest way to install the whole distribution (...)

On other sites (8242)

  • Stream webm to node.js from c# application in chunks

    29 May 2018, by Dan-Levi Tømta

    I am in the process of learning about streaming between node.js (with socket.io) and C#.

    I have code that successfully records the screen with ffmpeg, redirects its StandardOutput.BaseStream, and stores it in a MemoryStream; when I click stop in my application it sends the memory stream as a byte array to the node.js server, which stores the file so the clients can play it. This is working just fine, and here is my setup for that:

    C#

    bool ffWorkerIsWorking = false;
    private void btnFFMpeg_Click(object sender, RoutedEventArgs e)
    {
       BackgroundWorker ffWorker = new BackgroundWorker();
       ffWorker.WorkerSupportsCancellation = true;
       ffWorker.DoWork += ((ffWorkerObj,ffWorkerEventArgs) =>
       {
           ffWorkerIsWorking = true;
           using (var FFProcess = new Process())
           {
               var processStartInfo = new ProcessStartInfo
               {
                   FileName = "ffmpeg.exe",
                   RedirectStandardInput = true,
                   RedirectStandardOutput = true,
                   UseShellExecute = false,
                   CreateNoWindow = false,
                   Arguments = " -loglevel panic -hide_banner -y -f gdigrab -draw_mouse 1 -i desktop -threads 2 -deadline realtime  -f webm -"
               };
               FFProcess.StartInfo = processStartInfo;
               FFProcess.Start();

               byte[] buffer = new byte[32768];
               using (MemoryStream ms = new MemoryStream())
               {
                   while (!FFProcess.HasExited)
                   {
                       int read = FFProcess.StandardOutput.BaseStream.Read(buffer, 0, buffer.Length);
                       if (read <= 0)
                           break;
                       ms.Write(buffer, 0, read);
                       Console.WriteLine(ms.Length);
                       if (!ffWorkerIsWorking)
                       {                                
                           clientSocket.Emit("video", ms.ToArray());                                
                           ffWorker.CancelAsync();
                           break;
                       }
                   }
               }
           }
       });
       ffWorker.RunWorkerAsync();
    }

    JS (Server)

    socket.on('video', function(data) {
       fs.appendFile('public/fooTest.webm', data, function (err) {
         if (err) throw err;
         console.log('File uploaded');
       });
    });

    Now I need to change this code so that instead of sending the whole file it sends chunks of byte arrays, and node will then initially create a file and append those chunks of byte arrays as they are received. OK, sounds easy enough, but apparently not.

    I need to somehow instruct the code to use an offset, send just the bytes after that offset, and then update the offset.

    On the server side I think the best approach is to create a file and append the byte arrays to that file as they are received.

    On the server side I would do something like this:

    JS (Server)

    var buffer = new Buffer(32768);
    var isBuffering = false;
    socket.on('video', function(data) {
       //concatenate the buffer with the incoming data and broadcast.emit to clients

    });

    How am I able to set up the offset for the bytes to be sent and update that offset, and how would I approach concatenating the data to the initialized buffer?
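One way to sidestep the offset bookkeeping entirely (a sketch of my own, not code from the thread; `addChunk` and `finish` are hypothetical names) is to collect each incoming chunk in an array on the node side and concatenate once at the end. `Buffer.concat` computes all the offsets internally, so no manual offset tracking is needed:

```javascript
// Sketch: collect each incoming chunk, concatenate once at the end.
// Buffer.concat computes offsets internally, so the server needs no
// manual offset tracking. addChunk/finish are hypothetical helpers.
const chunks = [];

function addChunk(data) {
  // socket.io delivers binary payloads as Buffers on the node side
  chunks.push(data);
}

function finish() {
  // A single concatenation at the end avoids the repeated copying
  // that Buffer.concat([buffer, data]) on every chunk would cause.
  return Buffer.concat(chunks);
}
```

The same idea matters on the C# side: emit only the `read` bytes of each iteration (a copy of the first `read` elements of `buffer`) rather than the whole 32768-byte array, since the tail of `buffer` still holds stale data from the previous read.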

    I have tried to write some code that only reads from the offset to the end, and it seems to be working, although the video when assembled in node is just black:

    C#

    while (!FFProcess.HasExited)
    {
       int read = FFProcess.StandardOutput.BaseStream.Read(buffer, 0, buffer.Length);
       if (read <= 0)
           break;
       int offset = (read - buffer.Length > 0 ? read - buffer.Length : 0);
       ms.Write(buffer, offset, read);
       clientSocket.Emit("videoChunk", buffer.ToArray());
       if (!ffWorkerIsWorking)
       {                                
           ffWorker.CancelAsync();
           break;
       }
    }

    Node console output

    Bytes read

    JS (Server)

    socket.on('videoChunk', function(data) {
       if (!isBufferingDone) {
           buffer = Buffer.concat([buffer, data]);
           console.log(data.length);
       }
    });

    socket.on('cancelVideo', function() {
       isBufferingDone = true;
       setTimeout(function() {
           fs.writeFile("public/test.webm", buffer, function(err) {
               if(err) {
                   return console.log(err);
               }
               console.log("The file was saved!");
               buffer = new Buffer(32768);
           });
       }, 1000);
    });

    JS (Client)

    socket.on('video', function(filePath) {
      console.log('path: ' + filePath);
      $('#videoSource').attr('src',filePath);
      $('#video').play();
    });

    Thanks!

  • EAGLContexts sharing EAGLSharegroup giving error when subclassing GLKView

    27 August 2015, by Pawan Yadav

    I am trying to create a player using FFMPEG which can display frames using OpenGL. The player class has three threads: one for rendering (it runs a runloop and handles a render event triggered every N ms); it fetches the GLKTextureInfo objects stored in pictureQueue and renders them.
    One for reading packets from the VideoStream and putting them in a videoQueue; the third one fetches the packets from the videoQueue, decodes them, creates a GLKTextureInfo and stores it in pictureQueue.

    Case 1:
    The player class subclasses GLKView, creates an EAGLContext, sets it as its context and also as currentContext in the rendering thread (it's the first thread that starts).

    EAGLContext *mycontext = [self createBestEaglContext];
    if (!mycontext || ![EAGLContext setCurrentContext:mycontext]) {
       NSLog(@"Could not create Base EAGLContext");
       return;
    }
    [self setContext:mycontext];

    and then starts the stream decoding thread, which in turn starts the video packet decoding thread if it finds a video stream. Then:

    // sets the params for the GLKBaseEffect
    // sets up VBOs
    // runs the runloop

    The video packet decoding thread also creates an EAGLContext which shares the earlier created context's EAGLSharegroup.

    self.videoPacketDecodeThreadContext = [self createEaglContextForOtherThread];
    if (!self.videoPacketDecodeThreadContext || ![EAGLContext setCurrentContext:self.videoPacketDecodeThreadContext])
    {
       NSLog(@"Could not create video packet decode thread context");
    }

    The texture part:

    UIImage* image = [self ImageFromAVPicture:*(AVPicture*)pFrameRGB width:self.is.video_stream->codec->width height:self.is.video_stream->codec->height];

    NSError *error = nil;
    GLKTextureInfo *textureInfo = [GLKTextureLoader textureWithCGImage:image.CGImage
                                                                  options:nil
                                                                    error:&error];
    if (error)
    {
      NSLog(@"Texture loading Error: %@\n", error.description);
      //return -1;
    }
    else
    {
      [self.is.pictQueue_lock lock];
      [self.is.pictQueue enqueue:textureInfo];
      [self.is.pictQueue_lock unlock];
    }

    I get an error saying: Failed to bind EAGLDrawable:  to GL_RENDERBUFFER 1, Failed to make complete framebuffer object 8cd6, and glerror 1280.

    Case 2: The player doesn't subclass GLKView; instead it is set as the delegate of the GLKView created in the storyboard.

    -(void)initPlayerWithView:(GLKView*)v
    {
      self.view = v;
    }

    and sets everything up as above (sets self.view's context to mycontext); everything runs fine.

    -(void)drawRect:(CGRect)rect and -(void)glkView:(GLKView *)view drawInRect:(CGRect)rect are both called on the rendering thread. Rendering code:

       {
         [self.is.pictQueue_lock lock];
         GLKTextureInfo *textureInfo = (GLKTextureInfo*)[self.is.pictQueue dequeue];
         [self.is.pictQueue_lock unlock];

         // delete the previous texture
         GLuint index = self.baseEffect.texture2d0.name;
         glDeleteTextures(1, &index);

         self.baseEffect.texture2d0.name = textureInfo.name;
         self.baseEffect.texture2d0.target = textureInfo.target;

         [self.baseEffect prepareToDraw];
         glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

         // Enable vertex buffer
         glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);
         glEnableVertexAttribArray(GLKVertexAttribPosition);

         //Enable texture buffer
         glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)offsetof(Vertex, textureCoords));
         glEnableVertexAttribArray(GLKVertexAttribTexCoord0);

         glDrawElements(GL_TRIANGLES, sizeof(Indices)/sizeof(Indices[0]), GL_UNSIGNED_BYTE, 0);
       }

    How do I resolve the error in Case 1? Also, if there are other things I could do in a different way, please suggest, for example:

    1. Am I using too many threads?
    2. I am converting the decoded frame to a UIImage and then creating a texture from it. Can it be done differently?
  • avcodec/refstruct: Add simple API for refcounted objects

    4 août 2022, par Andreas Rheinhardt
    avcodec/refstruct: Add simple API for refcounted objects
    

    For now, this API is supposed to replace all the internal uses
    of reference counted objects in libavcodec; "internal" here
    means that the object is created in libavcodec and is never
    put directly in the hands of anyone outside of it.

    It is intended to be made public eventually, but for now
    I enjoy the ability to modify it freely.

    Several shortcomings of the AVBuffer API motivated this API:
    a) The unnecessary allocations (and ensuing error checks)
    when using the API. Besides the need for runtime checks, it
    imposes upon the developer the burden of thinking through
    what happens in case an error happens. Furthermore, these
    error paths are typically not covered by FATE.
    b) The AVBuffer API is designed with buffers and not with
    objects in mind: the type for the actual buffers used
    is uint8_t*; it pretends to be able to make buffers
    writable, but this is wrong in case the buffer is not a POD.
    Another instance of this thinking is the lack of a reset
    callback in the AVBufferPool API.
    c) The AVBuffer API incurs unnecessary indirections by
    going through the AVBufferRef.data pointer. In case the user
    tries to avoid this indirection and stores a pointer to
    AVBuffer.data separately (which also allows use of the correct
    type), the user has to keep these two pointers in sync
    in case they can change (and in any case has two pointers
    occupying space in the containing context). See the following
    commit using this API for H.264 parameter sets for an example
    of the removal of such syncing code, as well as of the casts
    involved in the parts where only the AVBufferRef* pointer
    was stored.
    d) Given that the AVBuffer API allows custom allocators,
    creating refcounted objects with dedicated free functions
    often involves a lot of boilerplate like this:

    obj = av_mallocz(sizeof(*obj));
    ref = av_buffer_create((uint8_t*)obj, sizeof(*obj), free_func, opaque, 0);
    if (!ref) {
        av_free(obj);
        return AVERROR(ENOMEM);
    }

    (There is also a corresponding av_free() at the end of free_func().)
    This is now just:

    obj = ff_refstruct_alloc_ext(sizeof(*obj), 0, opaque, free_func);
    if (!obj)
        return AVERROR(ENOMEM);

    See the subsequent patch for the framepool (i.e. get_buffer.c)
    for an example.

    This API does things differently; it is designed to be lightweight*
    as well as geared to the common case where the allocator of the
    underlying object does not matter as long as it is big enough and
    suitably aligned. This makes it possible to allocate the user data
    together with the API's bookkeeping data, which avoids an allocation
    as well as the need for separate pointers to the user data and the
    API's bookkeeping data. This entails that the actual allocation of
    the object is performed by RefStruct, not the user. This is what
    avoids the boilerplate code mentioned in d).

    As a downside, custom allocators are not supported, but it will
    become apparent in subsequent commits that there are enough
    usecases to make it worthwhile.

    Another advantage of this API is that one only needs to include
    the relevant header if one uses the API, and not when one includes
    the header of some other component that uses it. This is because
    there is no RefStruct-type analog of AVBufferRef. This brings with
    it one further downside: it is not apparent from the pointer itself
    whether the underlying object is managed by the RefStruct API
    or whether this pointer is a reference to it (or merely a pointer
    to it).

    Finally, this API supports const-qualified opaque pointees;
    this will allow the CBS code to avoid casting const away.

    *: Basically the only exception to the you-only-pay-for-what-you-use
    rule is that it always uses atomics for the refcount.

    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>

    • [DH] libavcodec/Makefile
    • [DH] libavcodec/refstruct.c
    • [DH] libavcodec/refstruct.h