
Media (91)

Other articles (76)

  • Contributing to its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list to ask for more information.
    Currently MediaSPIP is only available in French and (...)

  • MediaSPIP Core: Configuration

    9 November 2010, by

    By default, MediaSPIP Core provides three different configuration pages (these pages rely on the CFG configuration plugin): a page for the general configuration of the template set; a page for configuring the site's home page; a page for configuring the sections.
    It also provides an additional page, which only appears when certain plugins are enabled, for controlling the display and the plugin-specific features (...)

  • User profiles

    12 April 2011, by

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized; it is visible only when the visitor is logged in on the site.
    Users can edit their profile from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)

On other sites (7298)

  • Unable to install ffmpeg on CircleCI 2.0

    16 November 2017, by Mateusz Urbański

    I have a Ruby on Rails project that uses CircleCI to run tests. In the past I was using CircleCI 1.0, but I have now migrated to CircleCI 2.0. I have a problem installing ffmpeg. CircleCI 2.0 uses Ubuntu 14.04. I install ffmpeg like this:

    # ffmpeg installation
    sudo add-apt-repository ppa:mc3man/trusty-media -y
    sudo apt-get update
    sudo apt-get install ffmpeg

    and my circle.yml config file looks like this:

    version: 2
    environment:
     TZ: "/usr/share/zoneinfo/America/Los_Angeles"

    jobs:
     build:
       parallelism: 2
       working_directory: ~/circleci-survey-builder
       docker:
         - image: circleci/ruby:2.4.1-node
           environment:
             PGHOST: 127.0.0.1
             PGUSER: ubuntu
             RAILS_ENV: test
         - image: circleci/postgres:9.6-alpine
           environment:
             POSTGRES_USER: ubuntu
             POSTGRES_DB: circle_test
             POSTGRES_PASSWORD: ''
       steps:
         - checkout

         - run:
             name: 'Install CircleCI dependencies'
             command: bash deploy/circle-dependencies.sh

         - type: cache-restore
           key: dashboard-{{ checksum "Gemfile.lock" }}

         - run:
             name: 'Install gems'
             command: bundle install --path vendor/bundle

         - type: cache-save
           key: dashboard-{{ checksum "Gemfile.lock" }}
           paths:
             - vendor/bundle

         - run:
             name: 'Install postgresql-client'
             command: sudo apt install postgresql-client

         - run:
             name: 'Create database.yml'
             command: mv config/database.ci.yml config/database.yml

         - run:
             name: Set up SurveyBuilder database
             command: bundle exec rake db:structure:load --trace

         - run:
             name: 'Run tests'
             command: |
               bundle exec rspec spec

    It returns the following error when I run the build on CircleCI:

    #!/bin/bash -eo pipefail
    bash deploy/circle-dependencies.sh

    Fetching: bundler-1.16.0.gem (100%)
    Successfully installed bundler-1.16.0
    1 gem installed
    sudo: add-apt-repository: command not found


    Ign http://deb.debian.org jessie InRelease

    Get:1 http://deb.debian.org jessie-updates InRelease [145 kB]

    Hit http://deb.debian.org jessie Release.gpg

    Hit http://deb.debian.org jessie Release

    Get:2 http://deb.debian.org jessie-updates/main amd64 Packages [23.2 kB]

    Get:3 http://deb.debian.org jessie/main amd64 Packages [9063 kB]

    Get:4 http://security.debian.org jessie/updates InRelease [63.1 kB]

    Get:5 http://security.debian.org jessie/updates/main amd64 Packages [610 kB]

    Fetched 9904 kB in 1s (8178 kB/s)

    Reading package lists... Done

    Reading package lists... Done

    Building dependency tree


    Reading state information... Done

    Package ffmpeg is not available, but is referred to by another package.
    This may mean that the package is missing, has been obsoleted, or
    is only available from another source

    E: Package 'ffmpeg' has no installation candidate
    Exited with code 100

    How can I fix that?

  • Does the Remote I/O audio unit set the number of channels in the buffer?

    25 November 2013, by awfulcode

    I'm using kxmovie (it's a ffmpeg-based video player) as a base for an app and I'm trying to figure out how the RemoteI/O unit works on iOS when the only thing connected to a device is headphones and we're playing a track with more than 2 channels (say a surround 6 track channel). It seems like it is going with the output channel setting and the buffer only has 2 channels. Is this because of Core Audio's pull structure ? And if so, what's happening to the other channels in the track ? Are they being downmixed or simply ignored ?

    The code for the render callback connected to the RemoteIO unit is here:

    - (BOOL) renderFrames: (UInt32) numFrames
                  ioData: (AudioBufferList *) ioData
    {
       NSLog(@"Number of channels in buffer: %lu",ioData->mNumberBuffers);

       for (int iBuffer=0; iBuffer < ioData->mNumberBuffers; ++iBuffer) {
           memset(ioData->mBuffers[iBuffer].mData, 0, ioData->mBuffers[iBuffer].mDataByteSize);
       }


       if (_playing && _outputBlock ) {

           // Collect data to render from the callbacks
           _outputBlock(_outData, numFrames, _numOutputChannels);

           // Put the rendered data into the output buffer
           if (_numBytesPerSample == 4) // then we've already got floats
           {
               float zero = 0.0;

               for (int iBuffer=0; iBuffer < ioData->mNumberBuffers; ++iBuffer) {

                   int thisNumChannels = ioData->mBuffers[iBuffer].mNumberChannels;

                   for (int iChannel = 0; iChannel < thisNumChannels; ++iChannel) {
                       vDSP_vsadd(_outData+iChannel, _numOutputChannels, &zero, (float *)ioData->mBuffers[iBuffer].mData, thisNumChannels, numFrames);
                   }
               }
           }
           else if (_numBytesPerSample == 2) // then we need to convert SInt16 -> Float (and also scale)
           {
               float scale = (float)INT16_MAX;
               vDSP_vsmul(_outData, 1, &scale, _outData, 1, numFrames*_numOutputChannels);

               for (int iBuffer=0; iBuffer < ioData->mNumberBuffers; ++iBuffer) {

                   int thisNumChannels = ioData->mBuffers[iBuffer].mNumberChannels;

                   for (int iChannel = 0; iChannel < thisNumChannels; ++iChannel) {
                       vDSP_vfix16(_outData+iChannel, _numOutputChannels, (SInt16 *)ioData->mBuffers[iBuffer].mData+iChannel, thisNumChannels, numFrames);
                   }
               }

           }        
       }

       return noErr;
    }

    Thanks!

    Edit: here's the code for the ASBD (_outputFormat). It gets its values straight from the RemoteIO unit. You can also check the whole method file here.

    if (checkError(AudioUnitGetProperty(_audioUnit,
                                       kAudioUnitProperty_StreamFormat,
                                       kAudioUnitScope_Input,
                                       0,
                                       &_outputFormat,
                                       &size),
                  "Couldn't get the hardware output stream format"))
       return NO;


    _outputFormat.mSampleRate = _samplingRate;
    if (checkError(AudioUnitSetProperty(_audioUnit,
                                       kAudioUnitProperty_StreamFormat,
                                       kAudioUnitScope_Input,
                                       0,
                                       &_outputFormat,
                                       size),
                  "Couldn't set the hardware output stream format")) {

       // just warning
    }

    _numBytesPerSample = _outputFormat.mBitsPerChannel / 8;
    _numOutputChannels = _outputFormat.mChannelsPerFrame;

    NSLog(@"Current output bytes per sample: %ld", _numBytesPerSample);
    NSLog(@"Current output num channels: %ld", _numOutputChannels);

    // Slap a render callback on the unit
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = renderCallback;
    callbackStruct.inputProcRefCon = (__bridge void *)(self);

    if (checkError(AudioUnitSetProperty(_audioUnit,
                                       kAudioUnitProperty_SetRenderCallback,
                                       kAudioUnitScope_Input,
                                       0,
                                       &callbackStruct,
                                       sizeof(callbackStruct)),
                  "Couldn't set the render callback on the audio unit"))
       return NO;

  • FFMPEG multi livestream - recorded stream sent to different services like YT and Twitch at different times (on different button clicks)

    4 October 2022, by Ganesh

    I have been trying for the last 10 days with no success. I am creating a Python application that accepts a URL, visits it using Chromium, captures that screen, and sends the real-time screen recording to different livestream acceptors such as YouTube Live, Twitch, Twitter or Facebook Live; several of these can be active at once.

    There are two challenges (both depend on a user action, such as different button clicks):

      • When the livestream starts, we only know one livestream acceptor; the other acceptors can be added via another API call at any time, or may never be added during the whole livestream.

      • Any of the streams can be stopped at any moment, including the first one that started the original livestreaming service.

    To solve these challenges I am trying the following process (I used an mp4 file as the source for simplicity):

      • Create a stream and store it in pipe.stdout (see also the note after this list):
    import subprocess as sp

    ffmpeg_Command_get_stream = 'ffmpeg -re -i test.mp4 -f flv pipe:1'
    ffmpeg_Command_get_stream = ffmpeg_Command_get_stream.split()
    pipe = sp.Popen(ffmpeg_Command_get_stream,
                    stdout=sp.PIPE,
                    stderr=sp.PIPE,
                    bufsize=8000000,
                    shell=True,
                    universal_newlines=True)
    out, err = pipe.communicate()

      • Send that stream, with the help of FFmpeg, to the livestream acceptor when the YouTube livestream button is clicked:
      ffmpeg_Command_send_stream = ['ffmpeg','-i',pipe.stdout,'-f','flv',RTMPURL_YOUTUBE]
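
    A side note on the pipe created in the first step: out, err = pipe.communicate() waits for the first ffmpeg process to exit and buffers its entire output in memory, so a long-running real-time stream never becomes available to a second command that way. Below is a minimal sketch of reading the piped FLV data incrementally instead; the sp alias is assumed to be subprocess, and the 64 KiB chunk size is arbitrary.

    import subprocess as sp

    # First ffmpeg process: read the source in real time and emit FLV on stdout.
    pipe = sp.Popen(['ffmpeg', '-re', '-i', 'test.mp4', '-f', 'flv', 'pipe:1'],
                    stdout=sp.PIPE)

    # Read the stream incrementally instead of waiting for the process to exit.
    while True:
        chunk = pipe.stdout.read(65536)  # up to 64 KiB per read; the size is arbitrary
        if not chunk:                    # an empty read means ffmpeg closed the pipe
            break
        # ... hand the chunk to whatever consumes the stream ...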

    Update: trying to explain it a little more:

    Step 1 - I need a real-time stream from the first command, so I used -re in FFmpeg.

    Step 2 - Use the above stream as the input to another command and send its output as a livestream to YouTube (or Twitch/Facebook). The second step should only happen when the user clicks the "YT LiveStream" button. The tricky part is that there are several buttons (YT LiveStream, Twitch LiveStream, Facebook LiveStream); the user can click any of them at any time, and can also click all of them one by one. A sketch of this piping idea follows below.
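
    To make steps 1 and 2 concrete, here is a minimal sketch of the piping idea under the assumptions above: one long-running ffmpeg writes an FLV stream to its stdout, and a second ffmpeg, started only when a button is clicked, reads that stream from stdin and forwards it to an RTMP endpoint. RTMPURL_YOUTUBE is a placeholder for the real ingest URL and stream key.

    import subprocess as sp

    # Placeholder ingest URL; the real value comes from the YouTube/Twitch/Facebook API.
    RTMPURL_YOUTUBE = 'rtmp://a.rtmp.youtube.com/live2/STREAM-KEY'

    # Step 1: read the source in real time (-re) and write an FLV stream to stdout.
    # (In practice you would also pick RTMP-friendly codecs here, e.g. -c:v libx264 -c:a aac.)
    source = sp.Popen(['ffmpeg', '-re', '-i', 'test.mp4', '-f', 'flv', 'pipe:1'],
                      stdout=sp.PIPE)

    def start_youtube_stream():
        # Called from the "YT LiveStream" button handler: forward the piped FLV
        # from the first ffmpeg's stdout to the RTMP endpoint without re-encoding it again.
        return sp.Popen(['ffmpeg', '-i', 'pipe:0', '-c', 'copy', '-f', 'flv', RTMPURL_YOUTUBE],
                        stdin=source.stdout)

    Note that an OS pipe has only one reader, so this sketch covers a single acceptor at a time; serving several acceptors that start and stop independently would require duplicating the stream (for example through a local relay) rather than handing the same pipe to every forwarding process.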

    Sorry for the bad explanation.

    What am I doing wrong? Is this possible, or do I need to go with another process?

    Any help would be greatly appreciated.