Advanced search

Media (0)

Word: - Tags -/optimisation

No media matching your criteria is available on this site.

Other articles (43)

  • Submit enhancements and plugins

    13 April 2011

    If you have developed a new extension to add one or more useful features to MediaSPIP, let us know and its integration into the core MediaSPIP functionality will be considered.
    You can use the development discussion list to ask for help with creating a plugin, or, since MediaSPIP is based on SPIP, the SPIP discussion list SPIP-Zone.

  • Permissions overridden by plugins

    27 April 2010, by

    MediaSPIP core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • Libraries and binaries specific to video and audio processing

    31 January 2010, by

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries FFMpeg: the main encoder; it transcodes almost every type of video and audio file into formats readable on the Internet. See this tutorial for its installation; Oggz-tools: tools for inspecting ogg files; Mediainfo: retrieves information from most video and audio formats;
    Additional, optional binaries flvtool2: (...)
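
    For illustration only (hypothetical file names, not from the article), the kind of command-line work these binaries do looks like:

    ffmpeg -i source.avi -c:v libx264 -c:a aac web.mp4
    mediainfo web.mp4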

On other sites (5638)

  • What's the most desirable way to capture system display and audio in the form of individual encoded audio and video packets in Go (language)? [closed]

    11 January 2023, by Tiger Yang

    Question (read the context below first):

    For those of you familiar with the capabilities of Go: is there a better way to go about all this? Since ffmpeg is so ubiquitous, I'm sure it's been optimized to perfection, but what's the best way to capture system display and audio in the form of individual encoded audio and video packets in Go, so that they can then be sent via webtransport-go? I wish for it to prioritize efficiency and low latency, and ideally capture and encode the framebuffer directly like ffmpeg does.

    Thanks! I have many other questions about this, but I think it's best to ask as I go.

    Context and what I've done so far:

    I'm writing remote desktop software for my personal use because of grievances with the current solutions out there. At the moment, it consists of a web app that uses the WebTransport API to send input datagrams and receive AV packets on two dedicated unidirectional streams, and the WebCodecs API to decode these packets. On the server side, I originally planned to use Python with the aioquic library as a WebTransport server. Upon connection and authentication, the server would start ffmpeg as a subprocess with this command:

    ffmpeg -init_hw_device d3d11va -filter_complex ddagrab=video_size=1920x1080:framerate=60 -vcodec hevc_nvenc -tune ll -preset p7 -spatial_aq 1 -temporal_aq 1 -forced-idr 1 -rc cbr -b:v 400K -no-scenecut 1 -g 216000 -f hevc -

    What I really appreciate about this is that it uses Windows' Desktop Duplication API to copy the framebuffer of my GPU and hand that directly to the on-die hardware encoder, with zero round trips to the CPU. I think it's about as efficient and elegant a solution as I can manage. It then outputs the encoded stream to stdout, which Python can read and send to the client.

    As for the audio, there is another ffmpeg instance:

    ffmpeg -f dshow -channels 2 -sample_rate 48000 -sample_size 16 -audio_buffer_size 15 -i audio="RD Audio (High Definition Audio Device)" -acodec libopus -vbr on -application audio -mapping_family 0 -apply_phase_inv true -b:a 25K -fec false -packet_loss 0 -map 0 -f data -

    which listens to a physical loopback interface, which is literally just a short wire bridging the front panel headphone and microphone jacks (I'm aware of the quality loss of converting to analog and back, but the audio is then crushed down to 25 kbps, so it's fine).

    Unfortunately, aioquic was not easy to work with IMO, and I found webtransport-go https://github.com/adriancable/webtransport-go, which was a hell of a lot better in both simplicity and documentation. However, now I'm dealing with a whole new language, and I wanna ask: (above)

    EDIT: Here's the code for my server so far:

package main

import (
    "bytes"
    "context"
    "fmt"
    "log"
    "net/http"
    "os/exec"
    "time"

    "github.com/adriancable/webtransport-go"
)

func warn(str string) {
    fmt.Printf("\n===== WARNING ===================================================================================================\n   %s\n=================================================================================================================\n", str)
}

func main() {

    password := []byte("abc")

    videoString := []string{
        "ffmpeg",
        "-init_hw_device", "d3d11va",
        "-filter_complex", "ddagrab=video_size=1920x1080:framerate=60",
        "-vcodec", "hevc_nvenc",
        "-tune", "ll",
        "-preset", "p7",
        "-spatial_aq", "1",
        "-temporal_aq", "1",
        "-forced-idr", "1",
        "-rc", "cbr",
        "-b:v", "500K",
        "-no-scenecut", "1",
        "-g", "216000",
        "-f", "hevc", "-",
    }

    audioString := []string{
        "ffmpeg",
        "-f", "dshow",
        "-channels", "2",
        "-sample_rate", "48000",
        "-sample_size", "16",
        "-audio_buffer_size", "15",
        "-i", "audio=RD Audio (High Definition Audio Device)",
        "-acodec", "libopus",
        "-mapping_family", "0",
        "-b:a", "25K",
        "-map", "0",
        "-f", "data", "-",
    }

    connected := false // single-session guard (note: shared across handlers without synchronization)

    http.HandleFunc("/", func(writer http.ResponseWriter, request *http.Request) {
        session := request.Body.(*webtransport.Session) // webtransport-go delivers the session via the request body

        session.AcceptSession()
        fmt.Println("\nAccepted incoming WebTransport connection.")
        fmt.Println("Awaiting authentication...")

        authData, err := session.ReceiveMessage(session.Context()) // Waits here till first datagram
        if err != nil {                                            // if client closes connection before sending anything
            fmt.Println("\nConnection closed:", err)
            return
        }

        if len(authData) >= 2 && bytes.Equal(authData[2:], password) { // bytes after the 2-byte datagram header are the password
            if connected {
                session.CloseSession()
                warn("Client has authenticated, but a session is already taking place! Connection closed.")
                return
            } else {
                connected = true
                fmt.Println("Client has authenticated!\n")
            }
        } else {
            session.CloseSession()
            warn("Client has failed authentication! Connection closed. (" + string(authData[2:]) + ")")
            return
        }

        videoStream, _ := session.OpenUniStreamSync(session.Context())

        videoCmd := exec.Command(videoString[0], videoString[1:]...)
        go func() {
            // relay ffmpeg's encoded video from its stdout to the video stream
            videoOut, _ := videoCmd.StdoutPipe()
            videoCmd.Start()

            buffer := make([]byte, 15000)
            for {
                n, err := videoOut.Read(buffer)
                if err != nil {
                    break // ffmpeg exited or the pipe closed
                }
                if n > 0 {
                    videoStream.Write(buffer[:n])
                }
            }
        }()

        time.Sleep(50 * time.Millisecond)

        audioStream, _ := session.OpenUniStreamSync(session.Context()) // error ignored, as with the video stream above

        audioCmd := exec.Command(audioString[0], audioString[1:]...)
        go func() {
            // relay encoded audio from ffmpeg's stdout to the audio stream
            audioOut, _ := audioCmd.StdoutPipe()
            audioCmd.Start()

            buffer := make([]byte, 15000)
            for {
                n, err := audioOut.Read(buffer)
                if err != nil {
                    break // ffmpeg exited or the pipe closed
                }
                if n > 0 {
                    audioStream.Write(buffer[:n])
                }
            }
        }()

        for {
            data, err := session.ReceiveMessage(session.Context())
            if err != nil {
                videoCmd.Process.Kill()
                audioCmd.Process.Kill()

                connected = false

                fmt.Println("\nConnection closed:", err)
                break
            }

            if len(data) == 0 {
                // empty datagram: nothing to do
            } else if data[0] == byte(0) {
                fmt.Printf("Received mouse datagram: %s\n", data)
            }
        }

    })

    server := &webtransport.Server{
        ListenAddr: ":1024",
        TLSCert:    webtransport.CertFile{Path: "SSL/fullchain.pem"},
        TLSKey:     webtransport.CertFile{Path: "SSL/privkey.pem"},
        QuicConfig: &webtransport.QuicConfig{
            KeepAlive:      false,
            MaxIdleTimeout: 3 * time.Second,
        },
    }

    fmt.Println("Launching WebTransport server at", server.ListenAddr)
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel() // release the context on every exit path
    if err := server.Run(ctx); err != nil {
        log.Fatal(err)
    }

}
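
    A side note on the two relay goroutines above: the stream objects expose Write (as used in the code) and StdoutPipe returns an io.ReadCloser, so each hand-rolled read/write loop can collapse into a single io.Copy call, with failures surfaced instead of silently dropped. A minimal sketch, not from the original post:

package main

import (
    "io"
    "log"
    "os"
    "os/exec"
)

// relay starts cmd and copies its stdout into dst (for example a
// webtransport-go unidirectional stream, which provides Write) until
// the process exits or the stream fails.
func relay(dst io.Writer, cmd *exec.Cmd) error {
    out, err := cmd.StdoutPipe() // must be requested before Start
    if err != nil {
        return err
    }
    if err := cmd.Start(); err != nil {
        return err
    }
    _, err = io.Copy(dst, out) // blocks until EOF or a write error
    return err
}

func main() {
    // Hypothetical usage: relay an ffmpeg subprocess's stdout to our own stdout.
    if err := relay(os.Stdout, exec.Command("ffmpeg", "-version")); err != nil {
        log.Fatal(err)
    }
}

    Each goroutine body in the server above would then reduce to something like go relay(videoStream, videoCmd).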

  • ffmpeg: creating an mpeg1 video stream

    7 March 2014, by phill

    I'm trying to create a short video file out of a single image, lasting 5 seconds, on a Windows machine using ffmpeg. The video file is to be concatenated in front of video files taken with a camera that produces 1920x1080 60fps. The following creates a 1-second stream instead of a 5-second one. Any ideas? Thanks in advance.

    "c :\program files\ffmpeg\ffmpeg32" -f image2 -i "c :\program files\ffmpeg\imput1.jpg" -loop 1 -vcodec mpeg1video -b:v
    104857200 -r 59.94 -s 1920x1080 -aspect 16:9 -t 5 "c :\program files\ffmpeg\banner.MPG"

    Here are my output results :

    c :\Program Files\ffmpeg>"c :\program files\ffmpeg\ffmpeg32" -f image2 -i "input1.jpg" -loop 1 -vcodec mpeg1video -b:v 10
    4857200 -r 59.94 -s 1920x1080 -aspect 16:9 -t 5 "c :\program files\ffmpeg\banner.
    MPG"
    ffmpeg version N-52045-g694fa00 Copyright (c) 2000-2013 the FFmpeg developers
    built on Apr 12 2013 16:54:51 with gcc 4.8.0 (GCC)
    configuration : —enable-gpl —enable-version3 —disable-w32threads —enable-av
    isynth —enable-bzlib —enable-fontconfig —enable-frei0r —enable-gnutls —enab
    le-iconv —enable-libass —enable-libbluray —enable-libcaca —enable-libfreetyp
    e —enable-libgsm —enable-libilbc —enable-libmp3lame —enable-libopencore-amrn
    b —enable-libopencore-amrwb —enable-libopenjpeg —enable-libopus —enable-libr
    tmp —enable-libschroedinger —enable-libsoxr —enable-libspeex —enable-libtheo
    ra —enable-libtwolame —enable-libvo-aacenc —enable-libvo-amrwbenc —enable-li
    bvorbis —enable-libvpx —enable-libx264 —enable-libxavs —enable-libxvid —ena
    ble-zlib
    libavutil 52. 26.100 / 52. 26.100
    libavcodec 55. 2.100 / 55. 2.100
    libavformat 55. 2.100 / 55. 2.100
    libavdevice 55. 0.100 / 55. 0.100
    libavfilter 3. 53.101 / 3. 53.101
    libswscale 2. 2.100 / 2. 2.100
    libswresample 0. 17.102 / 0. 17.102
    libpostproc 52. 3.100 / 52. 3.100
    Input #0, image2, from 'c :\program files\ffmpeg\input1.jpg' :
    Duration : 00:00:00.04, start : 0.000000, bitrate : N/A
    Stream #0:0 : Video : mjpeg, yuvj420p, 722x267 [SAR 1:1 DAR 722:267], 25 tbr,
    25 tbn, 25 tbc
    File 'c :\program files\ffmpeg\banner.MPG' already exists. Overwrite ? [y/N] y
    VBV buffer size not set, muxing may fail
    Output #0, mpeg, to 'c :\program files\ffmpeg\banner.MPG' :
    Metadata :
    encoder : Lavf55.2.100
    Stream #0:0 : Video : mpeg1video, yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], q=2-3
    1, 104857 kb/s, 90k tbn, 59.94 tbc
    Stream mapping :
    Stream #0:0 -> #0:0 (mjpeg -> mpeg1video)
    Press [q] to stop, [?] for help
    frame= 1 fps=0.0 q=9.8 Lsize= 130kB time=00:00:00.01 bitrate=63812.1kbits
    /s
    video:130kB audio:0kB subtitle:0 global headers:0kB muxing overhead 0.373990%

    c :\Program Files\ffmpeg>
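
    The transcript shows the likely cause: image2 reads the jpg as a one-frame input (Duration: 00:00:00.04), and -loop 1 is placed after -i, where it is parsed as an output option and has no effect. A probable fix, not from the original post, is to move -loop 1 in front of -i so the single image is looped as input, with -t 5 cutting the result at five seconds:

    "c:\program files\ffmpeg\ffmpeg32" -loop 1 -f image2 -i "c:\program files\ffmpeg\input1.jpg" -vcodec mpeg1video -b:v 104857200 -r 59.94 -s 1920x1080 -aspect 16:9 -t 5 "c:\program files\ffmpeg\banner.MPG"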

  • qt-faststart and ffmpeg to generate a live mp4 file [duplicate]

    27 February 2014, by Dnaso

    I am using ffmpeg to create an mp4 file on my server. I am also trying to use qt-faststart to be able to move the moov atom to the front so it will stream. I have searched all over the internet with no luck. Is it possible to put my video/audio in an mp4 buffer type file and then be able to play it while ffmpeg is still dumping video and audio data into the stream? The point is I am trying to stream from a camera, and Android is horrid... I know both iOS and Android support mp4, so I was trying to figure out a way I can turn my RTSP stream into mp4.

    Main point of the story: I want to continuously feed my camera feed into my mp4 container and still be able to play back the file so my clients can watch.

    Any help appreciated, thank you.
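
    A plausible direction, not from the original question: qt-faststart only works on a finished file, because it relocates an already-written moov atom, so it cannot help while ffmpeg is still writing. Fragmented MP4 avoids the problem, since each fragment carries its own index and the file stays playable as it grows. A sketch, with a hypothetical camera URL:

    ffmpeg -i rtsp://camera.example/stream -c copy -movflags frag_keyframe+empty_moov live.mp4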