Advanced search

Media (1)

Keyword: - Tags -/illustrator

Other articles (62)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

  • No talk of markets, clouds, etc.

    10 April 2011

    The vocabulary used on this site tries to avoid any reference to the buzzwords that flourish so
    freely around web 2.0 and in the companies that live off it.
    You are therefore invited to avoid terms such as "Brand", "Cloud", "Market", etc.
    Our motivation is above all to create a simple tool, accessible to everyone, that encourages
    the sharing of creations on the Internet and lets authors keep as much autonomy as possible.
    No "Gold or Premium contract" is therefore planned, no (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, as long as your MédiaSpip installation is at version 0.2 or higher. If in doubt, contact the administrator of your MédiaSpip to find out.

On other sites (7968)

  • Alternative to ffmpeg for dynamically creating video thumbnails [closed]

    4 July 2021, by Daniel Rusev

    The server hosting my website doesn't have ffmpeg and I am not allowed to install any additional extensions. Is there any other way I can generate video thumbnails dynamically? Perhaps some kind of web service where I pass the video file and get a picture file back. I'm using PHP, by the way.
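    The thread has no answer here, but the web-service idea boils down to an HTTP file upload followed by downloading the returned image. As a rough sketch (in Python rather than PHP, and with a purely hypothetical endpoint URL), the multipart upload body could be built like this:

```python
import io
import uuid

def build_multipart_body(field_name, filename, file_bytes, content_type="video/mp4"):
    """Build a multipart/form-data body for uploading one file.

    Returns (body_bytes, boundary) suitable for an HTTP POST.
    """
    boundary = uuid.uuid4().hex
    buf = io.BytesIO()
    buf.write(f"--{boundary}\r\n".encode())
    buf.write(
        f'Content-Disposition: form-data; name="{field_name}"; '
        f'filename="{filename}"\r\n'.encode()
    )
    buf.write(f"Content-Type: {content_type}\r\n\r\n".encode())
    buf.write(file_bytes)
    buf.write(f"\r\n--{boundary}--\r\n".encode())
    return buf.getvalue(), boundary

# Hypothetical endpoint, only to show the shape of the request:
# body, boundary = build_multipart_body("video", "clip.mp4", open("clip.mp4", "rb").read())
# req = urllib.request.Request(
#     "https://thumbnail-service.example.com/convert", data=body, method="POST",
#     headers={"Content-Type": f"multipart/form-data; boundary={boundary}"})
```

    The same structure carries over to PHP (e.g. cURL with CURLFile); the service itself still has to exist and do the actual frame extraction.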

    


  • Convert ffmpeg yuv420p AVFrame to CMSampleBufferRef (kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)

    21 July 2014, by user3272750

    I have a Foscam IP camera and have access to its RTSP stream. I used DFURTSPPlayer to view the stream on my iOS device, which works fine. I use a WebRTC provider that lets me inject frames as CMSampleBufferRef, in addition to reading directly from any of the on-board cameras. I want to use this to broadcast the IP camera stream over a secure WebRTC session.

    The main loop in DFURTSPPlayer checks whether a frame is available, converts it into a UIImage, and sets it on an image view.

    -(void)displayNextFrame:(NSTimer *)timer
    {
       NSTimeInterval startTime = [NSDate timeIntervalSinceReferenceDate];
       if (![video stepFrame]) {
           [timer invalidate];
           [playButton setEnabled:YES];
           [video closeAudio];
           return;
       }
       imageView.image = video.currentImage;
       float frameTime = 1.0/([NSDate timeIntervalSinceReferenceDate]-startTime);
       if (lastFrameTime<0) {
           lastFrameTime = frameTime;
       } else {
           lastFrameTime = LERP(frameTime, lastFrameTime, 0.8);
       }
       [label setText:[NSString stringWithFormat:@"%.0f",lastFrameTime]];
    }
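    Incidentally, the LERP line above is just an exponential moving average of the instantaneous frame rate. A minimal sketch of the same smoothing in Python, assuming LERP(a, b, t) = a*(1 - t) + b*t so that the previous value keeps 80% of the weight:

```python
def lerp(a, b, t):
    """Linear interpolation: a*(1 - t) + b*t."""
    return a * (1.0 - t) + b * t

def smooth_fps(frame_times, weight=0.8):
    """Exponentially smooth instantaneous FPS readings, like the
    lastFrameTime = LERP(frameTime, lastFrameTime, 0.8) line above.
    frame_times is a sequence of per-frame durations in seconds."""
    last = -1.0
    for dt in frame_times:
        fps = 1.0 / dt
        last = fps if last < 0 else lerp(fps, last, weight)
    return last
```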

    I’m trying to do something similar, but instead of (or in addition to) setting the UIImage I want to also inject the frames into my WebRTC service. Below is an example where they use an AVCaptureSession. I believe I could do something similar in the run loop here and inject the frame (provided I can convert the yuv420p AVFrame into a CMSampleBufferRef):

    - (void) captureOutput:(AVCaptureOutput*) captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef) sampleBuffer
           fromConnection:(AVCaptureConnection*) connection
    {
       self.videoFrame.frameBuffer = sampleBuffer;

       // IMPORTANT: injectFrame expects a 420YpCbCr8BiPlanarFullRange and frame
       //            gets timestamped inside the service.
       NSLog(@"videoframe buffer %@",self.videoFrame.frameBuffer);
       [self.service injectFrame:self.videoFrame];
    }

    Hence my question. Most of the questions on Stack Overflow go in the other direction (typically broadcasting on-board camera input via RTSP). I’m a n00b as far as AVFoundation/Core Video is concerned, but I’m prepared to put in the groundwork if someone can suggest a path. Thanks in advance!

    Edit: After reading some more on this, it seems that the most important step is the conversion from 420p to 420f.
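    The "420p to 420f" remark actually mixes two separate changes: the plane layout (ffmpeg's yuv420p keeps three separate Y/U/V planes, while the 420YpCbCr8BiPlanar formats are biplanar, i.e. one Y plane plus one interleaved CbCr plane) and the range ("v" video range vs. "f" full range). The layout half is just a byte shuffle, sketched here in Python purely for illustration; in the iOS code this would be memcpy'ing the AVFrame planes into the two planes of a locked CVPixelBufferRef:

```python
def i420_to_nv12(y, u, v):
    """Repack planar YUV 4:2:0 (separate Y, U, V planes, as in ffmpeg's
    yuv420p) into the biplanar layout (Y plane + interleaved CbCr plane)
    used by the kCVPixelFormatType_420YpCbCr8BiPlanar* pixel formats.

    y, u, v are bytes objects; u and v must have equal length
    (width/2 * height/2 chroma samples each).
    """
    assert len(u) == len(v)
    uv = bytearray(2 * len(u))
    uv[0::2] = u  # Cb samples land at even offsets
    uv[1::2] = v  # Cr samples land at odd offsets
    return y, bytes(uv)
```

    The range half (video vs. full) is a separate per-sample scaling and is not handled by this shuffle.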

  • Seek in fragmented MP4

    15 November 2020, by Stefan Falk

    For my web client I want the user to be able to play a track right away, without having to download the entire file. For this I am using a fragmented MP4 with the AAC audio codec (MIME type: audio/mp4; codecs="mp4a.40.2").

    


    This is the command used to convert an input file to an fMP4:

    


    ffmpeg -i /tmp/input.any \
  -f mp4 \
  -movflags faststart+separate_moof+empty_moov+default_base_moof \
  -acodec aac -b:a 256000 \
  -frag_duration 500K \
   /tmp/output.mp4


    


    If I inspect this file with MP4Box.js, I see that it is fragmented like this:

    


    ftyp
moov
moof
mdat
moof
mdat
..
moof
mdat
mfra


    


    This looks alright so far, but the problem I am now facing is that it's not apparent to me how to start loading data from a specific timestamp without introducing additional overhead. What I mean is that I need the exact byte offset of the first [moof][mdat] pair for a specific timestamp, without the entire file being available.

    


    Let's say I have a file that looks like this:

    


    ftyp
moov
moof # 00:00
mdat 
moof # 00:01
mdat
moof # 00:02
mdat
moof # 00:03
mdat
mfra


    


    This file, however, is not available on my server directly; it is loaded from another service, and the client wants to request packets starting at 00:02.

    


    Is there a way to do this efficiently, without my having to load the entire file from the other service onto my server?

    


    My guess would be to load [ftyp][moov] (or store at least this part on my own server), but as far as I know, the metadata stored in those boxes won't help me find the byte offset of the first [moof][mdat] pair.

    


    Is this even possible, or am I following the wrong approach here?
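    For what it's worth, locating the [moof] offsets only requires walking top-level box headers: each MP4 box starts with a 4-byte big-endian size followed by a 4-byte type, so an offset index can be built from small ranged reads (or once, server-side, when the file is ingested). A minimal sketch, which ignores the 64-bit largesize case (size == 1):

```python
import struct

def list_top_level_boxes(data):
    """Walk top-level MP4 boxes in a byte string.

    Each box header is a 4-byte big-endian size followed by a
    4-byte ASCII type. Returns a list of (offset, type, size) tuples.
    (The size == 1 / 64-bit largesize case is not handled here.)
    """
    boxes = []
    pos = 0
    while pos + 8 <= len(data):
        size, = struct.unpack_from(">I", data, pos)
        box_type = data[pos + 4:pos + 8].decode("ascii")
        if size < 8:
            break  # size == 0 means "extends to end of file"; stop here
        boxes.append((pos, box_type, size))
        pos += size
    return boxes
```

    To map a timestamp to one of those offsets without the whole file, the usual candidates are the [mfra]/[tfra] box at the end of the file (which records moof offsets per random-access point) or an [sidx] segment index after [moov]; ffmpeg can emit the latter with the `global_sidx` movflag, so the front of the file alone would then be enough to resolve a seek to a byte range.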