
Media (91)

Other articles (66)

  • Managing creation and editing rights for objects

    8 February 2011

    By default, many features are restricted to administrators, but each can be configured independently to change the minimum user status required to use it, notably: writing content on the site, adjustable in the form-template management; adding notes to articles; adding captions and annotations to images;

  • Keeping control of your media in your hands

    13 April 2011

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and to the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player has been created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers, a Flash fallback based on Flowplayer is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (10139)

  • Does the Remote I/O audio unit set the number of channels in the buffer?

    25 November 2013, by awfulcode

    I'm using kxmovie (an ffmpeg-based video player) as a base for an app, and I'm trying to figure out how the RemoteIO unit works on iOS when the only thing connected to the device is headphones and we're playing a track with more than 2 channels (say a 6-channel surround track). It seems to go with the output channel setting, and the buffer only has 2 channels. Is this because of Core Audio's pull structure? And if so, what's happening to the other channels in the track? Are they being downmixed or simply ignored?
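By "downmixed" I mean something like a 5.1-to-stereo fold-down. The sketch below is purely illustrative: the -3 dB coefficients are a common convention, not anything Core Audio is documented to use here.

```c
#include <stdint.h>

/* Illustrative 5.1 -> stereo fold-down (hypothetical helper).
 * Input is interleaved as L R C LFE Ls Rs; output is interleaved L R.
 * The 0.707 (-3 dB) weights are a common convention only. */
static void downmix_51_to_stereo(const float *in, float *out, int numFrames)
{
    const float c = 0.70710678f;
    for (int i = 0; i < numFrames; i++) {
        const float *f = in + 6 * i;
        out[2 * i + 0] = f[0] + c * f[2] + c * f[4]; /* L + C + Ls */
        out[2 * i + 1] = f[1] + c * f[2] + c * f[5]; /* R + C + Rs */
    }
}
```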

    The code for the render callback attached to the RemoteIO unit is here:

    - (BOOL) renderFrames: (UInt32) numFrames
                   ioData: (AudioBufferList *) ioData
    {
        NSLog(@"Number of buffers in the list: %lu", (unsigned long)ioData->mNumberBuffers);

        // Zero the output buffers first
        for (int iBuffer = 0; iBuffer < ioData->mNumberBuffers; ++iBuffer) {
            memset(ioData->mBuffers[iBuffer].mData, 0, ioData->mBuffers[iBuffer].mDataByteSize);
        }

        if (_playing && _outputBlock) {

            // Collect data to render from the callbacks
            _outputBlock(_outData, numFrames, _numOutputChannels);

            // Put the rendered data into the output buffer
            if (_numBytesPerSample == 4) { // then we've already got floats
                float zero = 0.0;

                for (int iBuffer = 0; iBuffer < ioData->mNumberBuffers; ++iBuffer) {

                    int thisNumChannels = ioData->mBuffers[iBuffer].mNumberChannels;

                    // Scatter each source channel into the interleaved output
                    for (int iChannel = 0; iChannel < thisNumChannels; ++iChannel) {
                        vDSP_vsadd(_outData + iChannel, _numOutputChannels, &zero,
                                   (float *)ioData->mBuffers[iBuffer].mData + iChannel,
                                   thisNumChannels, numFrames);
                    }
                }
            }
            else if (_numBytesPerSample == 2) { // then we need to convert SInt16 -> Float (and also scale)
                float scale = (float)INT16_MAX;
                vDSP_vsmul(_outData, 1, &scale, _outData, 1, numFrames * _numOutputChannels);

                for (int iBuffer = 0; iBuffer < ioData->mNumberBuffers; ++iBuffer) {

                    int thisNumChannels = ioData->mBuffers[iBuffer].mNumberChannels;

                    for (int iChannel = 0; iChannel < thisNumChannels; ++iChannel) {
                        vDSP_vfix16(_outData + iChannel, _numOutputChannels,
                                    (SInt16 *)ioData->mBuffers[iBuffer].mData + iChannel,
                                    thisNumChannels, numFrames);
                    }
                }
            }
        }

        return YES; // the method is declared BOOL; returning noErr (0) would read as NO
    }

    Thanks!
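For reference, the vDSP calls above are just strided channel copies: each iteration moves one channel of the interleaved source into the same channel slot of the interleaved output. A plain-C equivalent of one such copy (a hypothetical helper, not part of kxmovie) makes that visible:

```c
#include <stdint.h>

/* Plain-C sketch of what one vDSP_vsadd call in the float branch does:
 * copy channel `ch` of the interleaved source (srcChannels samples per
 * frame) into channel `ch` of the interleaved destination (dstChannels
 * samples per frame). Channels that are never iterated over are simply
 * never written, i.e. dropped rather than mixed in. */
static void copy_channel(const float *src, int srcChannels,
                         float *dst, int dstChannels,
                         int ch, int numFrames)
{
    for (int i = 0; i < numFrames; i++)
        dst[i * dstChannels + ch] = src[i * srcChannels + ch];
}
```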

    Edit: here's the code for the ASBD (_outputFormat). It gets its values straight from the RemoteIO unit. You can also check the whole method file here.

    if (checkError(AudioUnitGetProperty(_audioUnit,
                                       kAudioUnitProperty_StreamFormat,
                                       kAudioUnitScope_Input,
                                       0,
                                       &_outputFormat,
                                       &size),
                  "Couldn't get the hardware output stream format"))
       return NO;


    _outputFormat.mSampleRate = _samplingRate;
    if (checkError(AudioUnitSetProperty(_audioUnit,
                                       kAudioUnitProperty_StreamFormat,
                                       kAudioUnitScope_Input,
                                       0,
                                       &_outputFormat,
                                       size),
                  "Couldn't set the hardware output stream format")) {

       // just warning
    }

    _numBytesPerSample = _outputFormat.mBitsPerChannel / 8;
    _numOutputChannels = _outputFormat.mChannelsPerFrame;

    NSLog(@"Current output bytes per sample: %ld", _numBytesPerSample);
    NSLog(@"Current output num channels: %ld", _numOutputChannels);

    // Slap a render callback on the unit
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = renderCallback;
    callbackStruct.inputProcRefCon = (__bridge void *)(self);

    if (checkError(AudioUnitSetProperty(_audioUnit,
                                       kAudioUnitProperty_SetRenderCallback,
                                       kAudioUnitScope_Input,
                                       0,
                                       &callbackStruct,
                                       sizeof(callbackStruct)),
                  "Couldn't set the render callback on the audio unit"))
       return NO;

  • Streaming webm with ffmpeg/ffserver

    20 octobre 2014, par Mediocre Gopher

    I'm attempting to capture my desktop screen, send it to an ffserver, and stream it as webm. I'm using the following ffserver configuration:

    <Feed feed1.ffm>            # This is the input feed where FFmpeg will send
      File ./feed1.ffm            # the video stream.
      FileMaxSize 1G              # Maximum file size for buffering video
      ACL allow 127.0.0.1
      ACL allow localhost
    </Feed>

    <stream>              # Output stream URL definition
      Feed feed1.ffm              # Feed from which to receive video
      Format webm

      # Audio settings
      AudioCodec vorbis
      AudioBitRate 64             # Audio bitrate

      # Video settings
      VideoCodec libvpx
      VideoSize 720x576           # Video resolution
      VideoFrameRate 25           # Video FPS

      AVOptionVideo cpu-used 10
      AVOptionVideo qmin 10
      AVOptionVideo qmax 42
      AVOptionVideo quality good
      AVOptionAudio flags +global_header
      PreRoll 15
      StartSendOnKey
      VideoBitRate 400            # Video bitrate
    </stream>

    And the following command on my desktop :

    ffmpeg -f x11grab -r 25 -s 1280x800 -i :0.0 -f alsa -i pulse http://127.0.0.1:8090/feed1.ffm

    ffmpeg is version 2.4.2 with libvpx enabled (the latest on Arch). I get this error:

    [libvpx @ 0x20a21a0] CQ level 0 must be between minimum and maximum quantizer value (10-42)

    This appears on the client side. As far as I can tell from ffmpeg -h full, there's no way of setting cq-level, and setting qmin to 0 doesn't work (it ends up as 3; I guess ffmpeg enforces a minimum).

    This configuration seems to have worked for others on the internet, but I can't see how, if cq-level defaults to 0. If anyone has any ideas I'd really appreciate it.
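One possible workaround (an assumption, not something verified against this ffserver build): ffmpeg's libvpx wrapper maps the generic crf option onto the encoder's CQ level, so pinning it inside the qmin/qmax range in the <Stream> block might satisfy the check:

```
  # Hypothetical addition: set the CQ level via ffmpeg's crf option
  # so that it falls inside the 10-42 quantizer range configured above.
  AVOptionVideo crf 10
```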

  • avfilter/blend_modes : Always preserve constness

    7 September 2023, by Andreas Rheinhardt
    avfilter/blend_modes : Always preserve constness
    

    These casts cast const away temporarily; they are safe, because
    the pointers that are initialized point to const data. But this
    is nevertheless not nice and leads to warnings when using
    -Wcast-qual. blend_modes.c generates 546 (2*39*7) such warnings,
    which is the majority of such warnings for FFmpeg as a whole.
    vf_blend.c and vf_blend_init.h also use this pattern;
    they have also been changed.

    Reviewed-by: Paul B Mahol <onemda@gmail.com>
    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>

    • [DH] libavfilter/blend_modes.c
    • [DH] libavfilter/vf_blend.c
    • [DH] libavfilter/vf_blend_init.h
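For illustration, the pattern the commit eliminates can be sketched in a few lines (hypothetical names, not FFmpeg's actual code): the "before" version casts const away to initialize a non-const pointer, which -Wcast-qual flags; the "after" version keeps the qualifier and compiles cleanly, since the data is only ever read.

```c
#include <stdint.h>

/* Hypothetical sketch of the pattern the commit removes.
 * The data is only read, so the pointer can stay const-qualified. */
static int sum_bytes(const uint8_t *src, int len)
{
    /* before: uint8_t *p = (uint8_t *)src;  // -Wcast-qual warns here */
    const uint8_t *p = src;                  /* after: const preserved */
    int sum = 0;
    for (int i = 0; i < len; i++)
        sum += p[i];
    return sum;
}
```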