
Other articles (111)

  • Automatic installation script for MediaSPIP

    25 April 2011

    To work around installation difficulties, mainly due to server-side software dependencies, an "all-in-one" bash installation script was created to make this step easier on a server running a compatible Linux distribution.
    You must have SSH access to your server and a "root" account in order to use it, so that the dependencies can be installed. Contact your hosting provider if you do not have these.
    The documentation on using the installation script (...)

  • Automatic backup of SPIP channels

    1 April 2010

    When setting up an open platform, it is important for hosting providers to have reasonably regular backups available in order to guard against any potential problem.
    This task relies on two SPIP plugins: Saveauto, which makes regular backups of the database in the form of a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which builds a zip archive of the site's important data (the documents, the elements (...)

  • Using and configuring the script

    19 January 2011

    Information specific to the Debian distribution
    If you use this distribution, you will need to enable the "debian-multimedia" repositories, as explained here:
    Since version 0.3.1 of the script, the repository can be enabled automatically in answer to a prompt.
    Retrieving the script
    The installation script can be retrieved in two different ways.
    Via svn, using this command to fetch the up-to-date source code:
    svn co (...)

On other sites (9597)

  • Undefined symbols av_register_all()

    8 May 2018, by JaSHin

    Good day,

    I am a beginner in Objective-C and the Xcode IDE. I am trying to use ffmpeg in my iOS application. I cloned https://github.com/kewlbear/FFmpeg-iOS-build-script and built for arm64 and x86_64.

    When I tried to build the app, the link step failed with:

    Ld /Users/nikolajpognerebko/Library/Developer/Xcode/DerivedData/CPP3-eowdhpsbeagmxydsrsscofhtuwtl/Build/Products/Debug-iphonesimulator/CPP3.app/CPP3 normal x86_64
    cd /Volumes/sedy/xcode/CPP3
    export IPHONEOS_DEPLOYMENT_TARGET=9.1
    export PATH="/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin:/Applications/Xcode.app/Contents/Developer/usr/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin"
    /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++ -arch x86_64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator9.1.sdk -L/Users/nikolajpognerebko/Library/Developer/Xcode/DerivedData/CPP3-eowdhpsbeagmxydsrsscofhtuwtl/Build/Products/Debug-iphonesimulator -L/Volumes/sedy/xcode/CPP3/CPP3/ffmpeg/lib -F/Users/nikolajpognerebko/Library/Developer/Xcode/DerivedData/CPP3-eowdhpsbeagmxydsrsscofhtuwtl/Build/Products/Debug-iphonesimulator -filelist /Users/nikolajpognerebko/Library/Developer/Xcode/DerivedData/CPP3-eowdhpsbeagmxydsrsscofhtuwtl/Build/Intermediates/CPP3.build/Debug-iphonesimulator/CPP3.build/Objects-normal/x86_64/CPP3.LinkFileList -Xlinker -rpath -Xlinker @executable_path/Frameworks -mios-simulator-version-min=9.1 -Xlinker -objc_abi_version -Xlinker 2 -stdlib=libc++ -fobjc-arc -fobjc-link-runtime -lavcodec -lavdevice -lavfilter -lavformat -lavutil -lswresample -lswscale -framework AVFoundation -liconv -lbz2 -Xlinker -dependency_info -Xlinker /Users/nikolajpognerebko/Library/Developer/Xcode/DerivedData/CPP3-eowdhpsbeagmxydsrsscofhtuwtl/Build/Intermediates/CPP3.build/Debug-iphonesimulator/CPP3.build/Objects-normal/x86_64/CPP3_dependency_info.dat -o /Users/nikolajpognerebko/Library/Developer/Xcode/DerivedData/CPP3-eowdhpsbeagmxydsrsscofhtuwtl/Build/Products/Debug-iphonesimulator/CPP3.app/CPP3

    Undefined symbols for architecture x86_64:
     "av_register_all()", referenced from:
         Decoder::Decoder() in ViewController.o
    ld: symbol(s) not found for architecture x86_64
    clang: error: linker command failed with exit code 1 (use -v to see invocation)

    There is a zipped project on OneDrive (http://1drv.ms/1KkPAia), because that is the best way to show my problem.

    Please help me and explain why this problem arises.

    Thanks very much.
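
    A note on the error itself: the undefined symbol is printed as "av_register_all()", with parentheses, which is a C++-mangled name. That usually means the FFmpeg C headers were included from a C++ or Objective-C++ file without C linkage. A minimal sketch of the usual fix, assuming Decoder lives in a C++/Objective-C++ translation unit (the class body here is only illustrative):

    // FFmpeg is a C library: wrap its headers in extern "C" so that calls
    // resolve to the unmangled C symbols at link time.
    extern "C" {
    #include <libavformat/avformat.h>
    }

    class Decoder {
    public:
        Decoder();
    };

    Decoder::Decoder() {
        av_register_all();  // now looked up as a plain C symbol
    }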

  • How to configure ffmpeg just to play RTSP videos

    28 February 2018, by Anuran Barman

    I have successfully compiled ffmpeg for Android and everything is working fine.

    I have made a specific build for each architecture, and even with that it is 9.7-9.9 MB in the debug version.

    My sole target is just to play RTSP video with authentication.

    What should the command-line options be for this when configuring?

    My current script looks like this:

    ./configure \
           --prefix=$prefix \
           --pkg-config=/usr/bin/pkg-config \
           --enable-shared \
           --disable-static \
           --disable-doc \
           --disable-ffmpeg \
           --disable-ffplay \
           --disable-ffprobe \
           --disable-avdevice \
           --disable-symver \
           --cross-prefix=$toolchain/bin/$crossPrefix \
           --target-os=android \
           --arch=arm \
           --enable-cross-compile \
           --sysroot=$sysroot \
           --enable-network \
           --extra-cflags="$mArchFlag" \
           --extra-ldflags="$extraLDFlags"
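
    One possible direction, sketched under the assumption that the streams carry H.264 video and AAC audio: start from --disable-everything and whitelist only the pieces RTSP playback needs (RTSP authentication is handled inside the rtsp demuxer itself, so it should not need an extra switch). The component names below are a guess at a typical set and should be checked against ./configure --list-demuxers, --list-decoders and --list-protocols for the ffmpeg version in use:

    ./configure \
           --disable-everything \
           --enable-shared \
           --disable-static \
           --enable-small \
           --enable-network \
           --enable-protocol=tcp \
           --enable-protocol=udp \
           --enable-protocol=rtp \
           --enable-demuxer=rtsp \
           --enable-demuxer=sdp \
           --enable-decoder=h264 \
           --enable-decoder=aac \
           --enable-parser=h264 \
           --enable-parser=aac \
           --target-os=android # ...plus the same cross-compile flags as in the script above
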
  • Google Speech API + Go - Transcribing Audio Stream of Unknown Length

    14 février 2018, par Josh

    I have an RTMP stream of a video call and I want to transcribe it. I have created two services in Go and I'm getting results, but the transcription is not very accurate and a lot of data seems to get lost.

    Let me explain.

    I have a transcode service: I use ffmpeg to transcode the video to LINEAR16 audio and place the output bytes onto a PubSub queue for a transcribe service to handle. Obviously there is a limit to the size of a PubSub message, and I want to start transcribing before the end of the video call. So I chunk the transcoded data into roughly 3-second clips (not a fixed length, it just seems about right) and put them onto the queue.

    The data is transcoded quite simply:

    // bytes.Buffer rather than the undefined "Buffer" type, so the fragment
    // compiles; it is not safe for concurrent use, see the note below.
    var stdout bytes.Buffer

    // Decode the input to raw 16 kHz mono signed 16-bit PCM on stdout.
    cmd := exec.Command("ffmpeg", "-i", url, "-f", "s16le", "-acodec", "pcm_s16le", "-ar", "16000", "-ac", "1", "-")
    cmd.Stdout = &stdout

    if err := cmd.Start(); err != nil {
       log.Fatal(err)
    }

    // Every 3 seconds, publish whatever has accumulated and start afresh.
    ticker := time.NewTicker(3 * time.Second)

    for range ticker.C {
       bytesConverted := stdout.Len()
       log.Infof("Converted %d bytes", bytesConverted)

       // Send the data we converted, even if there are no bytes.
       topic.Publish(ctx, &pubsub.Message{
           Data: stdout.Bytes(),
       })

       stdout.Reset()
    }
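
    One caveat on the snippet above: bytes.Buffer is not safe for concurrent use, and here ffmpeg is writing into it while the ticker loop reads and resets it (stdout.Bytes() also returns a slice that aliases the buffer about to be reset). A hedged sketch of one way to make the hand-off safe; the lockedBuffer type is an illustration, not something from the original services:

    import (
        "bytes"
        "sync"
    )

    // lockedBuffer serialises access to an underlying bytes.Buffer so that
    // ffmpeg's writes and the publishing loop's reads cannot race.
    type lockedBuffer struct {
        mu  sync.Mutex
        buf bytes.Buffer
    }

    func (b *lockedBuffer) Write(p []byte) (int, error) {
        b.mu.Lock()
        defer b.mu.Unlock()
        return b.buf.Write(p)
    }

    // Take copies out everything buffered so far and resets the buffer, so
    // the returned slice stays valid after the next write.
    func (b *lockedBuffer) Take() []byte {
        b.mu.Lock()
        defer b.mu.Unlock()
        data := append([]byte(nil), b.buf.Bytes()...)
        b.buf.Reset()
        return data
    }

    With this, cmd.Stdout would be a *lockedBuffer and the loop would publish buf.Take() instead of reading and resetting the buffer directly.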

    The transcribe service pulls messages from the queue at a rate of one every 3 seconds, which helps it process the audio data at about the same rate as it is created. There is a limit on a Speech API stream: it cannot be longer than 60 seconds. So I stop the old stream and start a new one every 30 seconds, and we never hit the limit no matter how long the video call lasts.

    This is how I'm transcribing it:

    stream := prepareNewStream()
    clipLengthTicker := time.NewTicker(30 * time.Second)
    chunkLengthTicker := time.NewTicker(3 * time.Second)

    cctx, cancel := context.WithCancel(context.TODO())
    err := subscription.Receive(cctx, func(ctx context.Context, msg *pubsub.Message) {

       select {
       case <-clipLengthTicker.C:
           log.Infof("Clip length reached.")
           log.Infof("Closing stream and starting over")

           err := stream.CloseSend()
           if err != nil {
               log.Fatalf("Could not close stream: %v", err)
           }

           go getResult(stream)
           stream = prepareNewStream()

       case <-chunkLengthTicker.C:
           log.Infof("Chunk length reached.")

           bytesConverted := len(msg.Data)

           log.Infof("Received %d bytes\n", bytesConverted)

           if bytesConverted > 0 {
               if err := stream.Send(&speechpb.StreamingRecognizeRequest{
                   StreamingRequest: &speechpb.StreamingRecognizeRequest_AudioContent{
                   AudioContent: msg.Data, // was transcodedChunk.Data, which is undefined here
                   },
               }); err != nil {
                   resp, _ := stream.Recv()
                   log.Errorf("Could not send audio: %v", resp.GetError())
               }
           }

           msg.Ack()
       }
    })

    I think the problem is that my 3-second chunks don't necessarily line up with the starts and ends of phrases or sentences. I suspect the Speech API is a recurrent neural network trained on full sentences rather than individual words, so starting a clip in the middle of a sentence loses some data: it can't figure out the first few words up to the natural end of a phrase. I also lose some data when changing from an old stream to a new one, since some context is lost. I guess overlapping clips might help with this; a sketch of one way to do that follows.
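
    For what it's worth, a minimal sketch of that overlap idea, as a helper the publishing loop could call on each tick (the half-second overlap is an arbitrary illustration, not a tuned value):

    // chunkWithOverlap returns the bytes to publish for this tick plus the
    // tail to carry into the next one, so audio cut at a chunk boundary is
    // heard twice rather than lost. For 16 kHz mono s16le, overlap = 16000
    // bytes is roughly half a second.
    func chunkWithOverlap(prevTail, cur []byte, overlap int) (publish, nextTail []byte) {
        publish = append(append([]byte(nil), prevTail...), cur...)
        if len(cur) > overlap {
            cur = cur[len(cur)-overlap:]
        }
        nextTail = append([]byte(nil), cur...)
        return publish, nextTail
    }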

    I have a couple of questions:

    1) Does this architecture seem appropriate for my constraints (unknown length of the audio stream, etc.)?

    2) What can I do to improve accuracy and minimise lost data?

    (Note: I've simplified the examples for readability. Point it out if anything doesn't make sense because I've been heavy-handed in cutting the examples down.)