Advanced search

Media (91)

Other articles (38)

  • Customising categories

    21 June 2013, by

    Category creation form
    For anyone who knows SPIP well, a category can be thought of as a rubrique (section).
    For a category-type document, the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a media-type document, the fields not displayed by default are: Short description
    It is also in this configuration section that you can specify the (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
    These technologies make it possible to deliver video and sound both to conventional computers (...)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (8969)

  • What's the most desirable way to capture system display and audio as individual encoded audio and video packets in Go (the language)? [closed]

    11 January 2023, by Tiger Yang

    Question (read the context below first):

    


    For those of you familiar with the capabilities of Go: is there a better way to go about all this? Since ffmpeg is so ubiquitous, I'm sure it has been optimized to perfection, but what is the best way to capture the system display and audio as individual encoded audio and video packets in Go, so that they can then be sent via webtransport-go? I want it to prioritize efficiency and low latency, and ideally to capture and encode the framebuffer directly, the way ffmpeg does.

    


    Thanks! I have many other questions about this, but I think it's best to ask them as I go.

    


    Context and what I've done so far:

    


    I'm writing remote desktop software for my personal use because of grievances with the current solutions out there. At the moment it consists of a web app that uses the WebTransport API to send input datagrams and receive AV packets on two dedicated unidirectional streams, and the WebCodecs API to decode those packets. On the server side, I originally planned to use Python with the aioquic library as a WebTransport server. Upon connection and authentication, the server would start ffmpeg as a subprocess with this command:

    


    ffmpeg -init_hw_device d3d11va -filter_complex ddagrab=video_size=1920x1080:framerate=60 -vcodec hevc_nvenc -tune ll -preset p7 -spatial_aq 1 -temporal_aq 1 -forced-idr 1 -rc cbr -b:v 400K -no-scenecut 1 -g 216000 -f hevc -

    


    What I really appreciate about this is that it uses Windows' Desktop Duplication API to copy the GPU framebuffer and hand it directly to the on-die hardware encoder, with zero round trips to the CPU. I think it's about as efficient and elegant a solution as I can manage. It then writes the encoded stream to stdout, which Python can read and send to the client.
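
    A minimal sketch of that originally planned Python side, reading the encoder output from stdout (the actual relay to the client would go through aioquic, which is only hinted at in a comment here):

import subprocess

# Shortened version of the capture command above; the full flag set from the question still applies.
FFMPEG_CMD = [
    "ffmpeg",
    "-init_hw_device", "d3d11va",
    "-filter_complex", "ddagrab=video_size=1920x1080:framerate=60",
    "-vcodec", "hevc_nvenc",
    "-f", "hevc", "-",
]

proc = subprocess.Popen(FFMPEG_CMD, stdout=subprocess.PIPE)
try:
    while True:
        # read whatever is currently available, up to 15 kB (the Go code below uses the same buffer size)
        chunk = proc.stdout.read1(15000)
        if not chunk:
            break
        # hand `chunk` to the client here, e.g. write it to an aioquic/WebTransport stream
finally:
    proc.kill()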

    


    As for the audio, there is another ffmpeg instance:

    


    ffmpeg -f dshow -channels 2 -sample_rate 48000 -sample_size 16 -audio_buffer_size 15 -i audio="RD Audio (High Definition Audio Device)" -acodec libopus -vbr on -application audio -mapping_family 0 -apply_phase_inv true -b:a 25K -fec false -packet_loss 0 -map 0 -f data -

    


    which listens to a physical loopback interface, which is literally just a short wire bridging the front-panel headphone and microphone jacks (I'm aware of the quality loss from converting to analog and back, but the audio is then crushed down to 25 kbps anyway, so it's fine).

    


    Unfortunately, aioquic was not easy to work with, in my opinion, and I found webtransport-go (https://github.com/adriancable/webtransport-go), which was a hell of a lot better in both simplicity and documentation. However, now I'm dealing with a whole new language, and I want to ask the question above.

    


    EDIT: Here's the code for my server so far:

package main

import (
    "bytes"
    "context"
    "fmt"
    "log"
    "net/http"
    "os/exec"
    "time"

    "github.com/adriancable/webtransport-go"
)

func warn(str string) {
    fmt.Printf("\n===== WARNING ===================================================================================================\n   %s\n=================================================================================================================\n", str)
}

func main() {

    password := []byte("abc")

    videoString := []string{
        "ffmpeg",
        "-init_hw_device", "d3d11va",
        "-filter_complex", "ddagrab=video_size=1920x1080:framerate=60",
        "-vcodec", "hevc_nvenc",
        "-tune", "ll",
        "-preset", "p7",
        "-spatial_aq", "1",
        "-temporal_aq", "1",
        "-forced-idr", "1",
        "-rc", "cbr",
        "-b:v", "500K",
        "-no-scenecut", "1",
        "-g", "216000",
        "-f", "hevc", "-",
    }

    audioString := []string{
        "ffmpeg",
        "-f", "dshow",
        "-channels", "2",
        "-sample_rate", "48000",
        "-sample_size", "16",
        "-audio_buffer_size", "15",
        "-i", "audio=RD Audio (High Definition Audio Device)",
        "-acodec", "libopus",
        "-mapping_family", "0",
        "-b:a", "25K",
        "-map", "0",
        "-f", "data", "-",
    }

    connected := false

    http.HandleFunc("/", func(writer http.ResponseWriter, request *http.Request) {
        session := request.Body.(*webtransport.Session)

        session.AcceptSession()
        fmt.Println("\nAccepted incoming WebTransport connection.")
        fmt.Println("Awaiting authentication...")

        authData, err := session.ReceiveMessage(session.Context()) // Waits here till first datagram
        if err != nil {                                            // if client closes connection before sending anything
            fmt.Println("\nConnection closed:", err)
            return
        }

        if len(authData) >= 2 && bytes.Equal(authData[2:], password) {
            if connected {
                session.CloseSession()
                warn("Client has authenticated, but a session is already taking place! Connection closed.")
                return
            } else {
                connected = true
                fmt.Println("Client has authenticated!\n")
            }
        } else {
            session.CloseSession()
            warn("Client has failed authentication! Connection closed. (" + string(authData[2:]) + ")")
            return
        }

        videoStream, _ := session.OpenUniStreamSync(session.Context())

        videoCmd := exec.Command(videoString[0], videoString[1:]...)
        go func() {
            videoOut, _ := videoCmd.StdoutPipe()
            videoCmd.Start()

            buffer := make([]byte, 15000)
            for {
                n, err := videoOut.Read(buffer)
                if err != nil {
                    break
                }
                if n > 0 {
                    videoStream.Write(buffer[:n])
                }
            }
        }()

        time.Sleep(50 * time.Millisecond)

        audioStream, err := session.OpenUniStreamSync(session.Context())

        audioCmd := exec.Command(audioString[0], audioString[1:]...)
        go func() {
            audioOut, _ := audioCmd.StdoutPipe()
            audioCmd.Start()

            buffer := make([]byte, 15000)
            for {
                n, err := audioOut.Read(buffer)
                if err != nil {
                    break
                }
                if n > 0 {
                    audioStream.Write(buffer[:n])
                }
            }
        }()

        for {
            data, err := session.ReceiveMessage(session.Context())
            if err != nil {
                videoCmd.Process.Kill()
                audioCmd.Process.Kill()

                connected = false

                fmt.Println("\nConnection closed:", err)
                break
            }

            if len(data) == 0 {

            } else if data[0] == byte(0) {
                fmt.Printf("Received mouse datagram: %s\n", data)
            }
        }

    })

    server := &webtransport.Server{
        ListenAddr: ":1024",
        TLSCert:    webtransport.CertFile{Path: "SSL/fullchain.pem"},
        TLSKey:     webtransport.CertFile{Path: "SSL/privkey.pem"},
        QuicConfig: &webtransport.QuicConfig{
            KeepAlive:      false,
            MaxIdleTimeout: 3 * time.Second,
        },
    }

    fmt.Println("Launching WebTransport server at", server.ListenAddr)
    ctx, cancel := context.WithCancel(context.Background())
    if err := server.Run(ctx); err != nil {
        log.Fatal(err)
        cancel()
    }

}


  • Discord 24/7 video stream self-bot crashes after a couple hours

    21 July 2023, by angelo

    I've implemented this library to make a self-bot that streams videos from a local folder in a loop 24/7 (don't ask me why). I set up an Ubuntu VPS to run the bot, and it works perfectly fine for the first 2-3 hours; after that it gets more and more laggy until the server crashes.
PS: it's basically my first time using JavaScript and I stole most of the code from this repo, so don't bully me.

    


    Here's the code:

    


    import { Client, TextChannel, CustomStatus, ActivityOptions } from "discord.js-selfbot-v13";
import { command, streamLivestreamVideo, VoiceUdp, setStreamOpts, streamOpts } from "@dank074/discord-video-stream";
import config from "./config.json";
import fs from 'fs';
import path from 'path';

const client = new Client();

client.patchVoiceEvents(); //this is necessary to register event handlers

setStreamOpts(
    config.streamOpts.width,
    config.streamOpts.height,
    config.streamOpts.fps,
    config.streamOpts.bitrateKbps,
    config.streamOpts.hardware_acc
)

const prefix = '$';

const moviesFolder = config.movieFolder || './movies';

const movieFiles = fs.readdirSync(moviesFolder);
let movies = movieFiles.map(file => {
    const fileName = path.parse(file).name;
    // remove spaces from the file name
    return { name: fileName.replace(/ /g, ''), path: path.join(moviesFolder, file) };
});
let originalMovList = [...movies];
let movList = movies;
let shouldStop = false;

// print out all movies
console.log(`Available movies:\n${movies.map(m => m.name).join('\n')}`);

const status_idle = () =>  {
    return new CustomStatus()
        .setState('摸鱼进行中')
        .setEmoji('🐟')
}

const status_watch = (name) => {
    return new CustomStatus()
        .setState(`Playing ${name}...`)
        .setEmoji('📽')
}

// ready event
client.on("ready", () => {
    if (client.user) {
        console.log(`--- ${client.user.tag} is ready ---`);
        client.user.setActivity(status_idle() as ActivityOptions)
    }
});

let streamStatus = {
    joined: false,
    joinsucc: false,
    playing: false,
    channelInfo: {
        guildId: '',
        channelId: '',
        cmdChannelId: ''
    },
    starttime: "00:00:00",
    timemark: '',
}

client.on('voiceStateUpdate', (oldState, newState) => {
    // when exit channel
    if (oldState.member?.user.id == client.user?.id) {
        if (oldState.channelId && !newState.channelId) {
            streamStatus.joined = false;
            streamStatus.joinsucc = false;
            streamStatus.playing = false;
            streamStatus.channelInfo = {
                guildId: '',
                channelId: '',
                cmdChannelId: streamStatus.channelInfo.cmdChannelId
            }
            client.user?.setActivity(status_idle() as ActivityOptions)
        }
    }
    // when join channel success
    if (newState.member?.user.id == client.user?.id) {
        if (newState.channelId && !oldState.channelId) {
            streamStatus.joined = true;
            if (newState.guild.id == streamStatus.channelInfo.guildId && newState.channelId == streamStatus.channelInfo.channelId) {
                streamStatus.joinsucc = true;
            }
        }
    }
})

client.on('messageCreate', async (message) => {
    if (message.author.bot) return; // ignore bots
    if (message.author.id == client.user?.id) return; // ignore self
    if (!config.commandChannels.includes(message.channel.id)) return; // ignore non-command channels
    if (!message.content.startsWith(prefix)) return; // ignore non-commands

    const args = message.content.slice(prefix.length).trim().split(/ +/); // split command and arguments
    if (args.length == 0) return;

    const user_cmd = args.shift()!.toLowerCase();

    if (config.commandChannels.includes(message.channel.id)) {
        switch (user_cmd) {
            case 'play':
                playCommand(args, message);
                break;
            case 'stop':
                stopCommand(message);
                break;
            case 'playtime':
                playtimeCommand(message);
                break;
            case 'pause':
                pauseCommand(message);
                break;
            case 'resume':
                resumeCommand(message);
                break;
            case 'list':
                listCommand(message);
                break;
            case 'status':
                statusCommand(message);
                break;
            case 'refresh':
                refreshCommand(message);
                break;
            case 'help':
                helpCommand(message);
                break;
            case 'playall':
                playAllCommand(args, message);
                break;
            case 'stream':
                streamCommand(args, message);
                break;
            case 'shuffle':
                shuffleCommand();
                break;
            case 'skip':
                //skip cmd
                break;
            default:
                message.reply('Invalid command');
        }
    }
});

client.login("TOKEN_HERE");

let lastPrint = "";

async function playAllCommand(args, message) {
    if (streamStatus.joined) {
        message.reply("Already joined");
        return;
    }

    // args = [guildId]/[channelId]
    if (args.length === 0) {
        message.reply("Missing voice channel");
        return;
    }

    // process args
    const [guildId, channelId] = args.shift()!.split("/");
    if (!guildId || !channelId) {
        message.reply("Invalid voice channel");
        return;
    }

    await client.joinVoice(guildId, channelId);
    streamStatus.joined = true;
    streamStatus.playing = false;
    streamStatus.starttime = "00:00:00";
    streamStatus.channelInfo = {
        guildId: guildId,
        channelId: channelId,
        cmdChannelId: message.channel.id,
    };

    const streamUdpConn = await client.createStream();

    streamUdpConn.voiceConnection.setSpeaking(true);
    streamUdpConn.voiceConnection.setVideoStatus(true);

    playAllVideos(streamUdpConn); // Start playing videos

    // Keep the stream open

    streamStatus.joined = false;
    streamStatus.joinsucc = false;
    streamStatus.playing = false;
    lastPrint = "";
    streamStatus.channelInfo = {
        guildId: "",
        channelId: "",
        cmdChannelId: "",
    };
}

async function playAllVideos(udpConn: VoiceUdp) {

    console.log("Started playing video");

    udpConn.voiceConnection.setSpeaking(true);
    udpConn.voiceConnection.setVideoStatus(true);

    try {
        let index = 0;

        while (true) {
            if (shouldStop) {
                break; // For the stop command
            }

            if (index >= movies.length) {
                // Reset the loop
                index = 0;
            }

            const movie = movList[index];

            if (!movie) {
                console.log("Movie not found");
                index++;
                continue;
            }

            let options = {};
            options["-ss"] = "00:00:00";

            console.log(`Playing ${movie.name}...`);

            try {
                let videoStream = streamLivestreamVideo(movie.path, udpConn);
                command?.on('progress', (msg) => {
                    // print the timemark if 10 seconds have passed since the last print; be careful when it wraps past 0
                    if (streamStatus.timemark) {
                        if (lastPrint != "") {
                            let last = lastPrint.split(':');
                            let now = msg.timemark.split(':');
                            // turn to seconds
                            let s = parseInt(now[2]) + parseInt(now[1]) * 60 + parseInt(now[0]) * 3600;
                            let l = parseInt(last[2]) + parseInt(last[1]) * 60 + parseInt(last[0]) * 3600;
                            if (s - l >= 10) {
                                console.log(`Timemark: ${msg.timemark}`);
                                lastPrint = msg.timemark;
                            }
                        } else {
                            console.log(`Timemark: ${msg.timemark}`);
                            lastPrint = msg.timemark;
                        }
                    }
                    streamStatus.timemark = msg.timemark;
                });
                const res = await videoStream;
                console.log("Finished playing video " + res);
            } catch (e) {
                console.log(e);
            }

            index++; // Move on to the next movie
        }
    } finally {
        udpConn.voiceConnection.setSpeaking(false);
        udpConn.voiceConnection.setVideoStatus(false);
    }

    command?.kill("SIGINT");
    // send message to channel, not reply
    (client.channels.cache.get(streamStatus.channelInfo.cmdChannelId) as TextChannel).send('Finished playing video, timemark is ' + streamStatus.timemark);
    client.leaveVoice();
    client.user?.setActivity(status_idle() as ActivityOptions)
    streamStatus.joined = false;
    streamStatus.joinsucc = false;
    streamStatus.playing = false;
    lastPrint = ""
    streamStatus.channelInfo = {
        guildId: '',
        channelId: '',
        cmdChannelId: ''
    };
}

function shuffleArray(array) {
    for (let i = array.length - 1; i > 0; i--) {
        const j = Math.floor(Math.random() * (i + 1));
        [array[i], array[j]] = [array[j], array[i]];
    }
}

function shuffleCommand() {
    shuffleArray(movList);
}

async function playCommand(args, message) {
    if (streamStatus.joined) {
        message.reply('Already joined');
        return;
    }

    // args = [guildId]/[channelId]
    if (args.length == 0) {
        message.reply('Missing voice channel');
        return;
    }

    // process args
    const [guildId, channelId] = args.shift()!.split('/');
    if (!guildId || !channelId) {
        message.reply('Invalid voice channel');
        return;
    }

    // get movie name and find movie file
    let moviename = args.shift()
    let movie = movies.find(m => m.name == moviename);

    if (!movie) {
        message.reply('Movie not found');
        return;
    }

    // get start time from args "hh:mm:ss"
    let startTime = args.shift();
    let options = {}
    // check if start time is valid
    if (startTime) {
        let time = startTime.split(':');
        if (time.length != 3) {
            message.reply('Invalid start time');
            return;
        }
        let h = parseInt(time[0]);
        let m = parseInt(time[1]);
        let s = parseInt(time[2]);
        if (isNaN(h) || isNaN(m) || isNaN(s)) {
            message.reply('Invalid start time');
            return;
        }
        startTime = `${h}:${m}:${s}`;
        options['-ss'] = startTime;
        console.log("Start time: " + startTime);
    }

    await client.joinVoice(guildId, channelId);
    streamStatus.joined = true;
    streamStatus.playing = false;
    streamStatus.starttime = startTime ? startTime : "00:00:00";
    streamStatus.channelInfo = {
        guildId: guildId,
        channelId: channelId,
        cmdChannelId: message.channel.id
    }
    const streamUdpConn = await client.createStream();
    playVideo(movie.path, streamUdpConn, options);
    message.reply('Playing ' + (startTime ? ` from ${startTime} ` : '') + moviename + '...');
    client.user?.setActivity(status_watch(moviename) as ActivityOptions);
}

function stopCommand(message) {
    client.leaveVoice()
    streamStatus.joined = false;
    streamStatus.joinsucc = false;
    streamStatus.playing = false;
    streamStatus.channelInfo = {
        guildId: '',
        channelId: '',
        cmdChannelId: streamStatus.channelInfo.cmdChannelId
    }
    // use sigquit??
    command?.kill("SIGINT");
    // msg
    message.reply('Stopped playing');
    shouldStop = true;
    movList = [...originalMovList];
}

function playtimeCommand(message) {
    // streamStatus.starttime + streamStatus.timemark
    // starttime is hh:mm:ss, timemark is hh:mm:ss.000
    let start = streamStatus.starttime.split(':');
    let mark = streamStatus.timemark.split(':');
    let h = parseInt(start[0]) + parseInt(mark[0]);
    let m = parseInt(start[1]) + parseInt(mark[1]);
    let s = parseInt(start[2]) + parseInt(mark[2]);
    if (s >= 60) {
        m += 1;
        s -= 60;
    }
    if (m >= 60) {
        h += 1;
        m -= 60;
    }
    message.reply(`Play time: ${h}:${m}:${s}`);
}

function pauseCommand(message) {
    if (!streamStatus.playing) {
        command?.kill("SIGSTOP");
        message.reply('Paused');
        streamStatus.playing = false;
    } else {
        message.reply('Not playing');
    }
}

function resumeCommand(message) {
    if (!streamStatus.playing) {
        command?.kill("SIGCONT");
        message.reply('Resumed');
        streamStatus.playing = true;
    } else {
        message.reply('Not playing');
    }
}

function listCommand(message) {
    message.reply(`Available movies:\n${movies.map(m => m.name).join('\n')}`);
}

function statusCommand(message) {
    message.reply(`Joined: ${streamStatus.joined}\nJoin success: ${streamStatus.joinsucc}\nPlaying: ${streamStatus.playing}\nChannel: ${streamStatus.channelInfo.guildId}/${streamStatus.channelInfo.channelId}\nTimemark: ${streamStatus.timemark}\nStart time: ${streamStatus.starttime}`);
}

function refreshCommand(message) {
    // refresh movie list
    const movieFiles = fs.readdirSync(moviesFolder);
    movies = movieFiles.map(file => {
        const fileName = path.parse(file).name;
        // remove spaces from the file name
        return { name: fileName.replace(/ /g, ''), path: path.join(moviesFolder, file) };
    });
    message.reply('Movie list refreshed ' + movies.length + ' movies found.\n' + movies.map(m => m.name).join('\n'));
}

function helpCommand(message) {
    // reply all commands here
    message.reply('Available commands:\nplay [guildId]/[channelId] [movie] [start time]\nstop\nlist\nstatus\nrefresh\nplaytime\npause\nresume\nhelp');
}

async function playVideo(video: string, udpConn: VoiceUdp, options: any) {
    console.log("Started playing video");

    udpConn.voiceConnection.setSpeaking(true);
    udpConn.voiceConnection.setVideoStatus(true);
    try {
        let videoStream = streamLivestreamVideo(video, udpConn);
        command?.on('progress', (msg) => {
            // print the timemark if 10 seconds have passed since the last print; be careful when it wraps past 0
            if (streamStatus.timemark) {
                if (lastPrint != "") {
                    let last = lastPrint.split(':');
                    let now = msg.timemark.split(':');
                    // turn to seconds
                    let s = parseInt(now[2]) + parseInt(now[1]) * 60 + parseInt(now[0]) * 3600;
                    let l = parseInt(last[2]) + parseInt(last[1]) * 60 + parseInt(last[0]) * 3600;
                    if (s - l >= 10) {
                        console.log(`Timemark: ${msg.timemark}`);
                        lastPrint = msg.timemark;
                    }
                } else {
                    console.log(`Timemark: ${msg.timemark}`);
                    lastPrint = msg.timemark;
                }
            }
            streamStatus.timemark = msg.timemark;
        });
        const res = await videoStream;
        console.log("Finished playing video " + res);
    } catch (e) {
        console.log(e);
    } finally {
        udpConn.voiceConnection.setSpeaking(false);
        udpConn.voiceConnection.setVideoStatus(false);
    }
    command?.kill("SIGINT");
    // send message to channel, not reply
    (client.channels.cache.get(streamStatus.channelInfo.cmdChannelId) as TextChannel).send('Finished playing video, timemark is ' + streamStatus.timemark);
    client.leaveVoice();
    client.user?.setActivity(status_idle() as ActivityOptions)
    streamStatus.joined = false;
    streamStatus.joinsucc = false;
    streamStatus.playing = false;
    lastPrint = ""
    streamStatus.channelInfo = {
        guildId: '',
        channelId: '',
        cmdChannelId: ''
    }
}

async function streamCommand(args, message) {

    if (streamStatus.joined) {
        message.reply('Already joined');
        return;
    }

    // args = [guildId]/[channelId]
    if (args.length == 0) {
        message.reply('Missing voice channel');
        return;
    }

    // process args
    const [guildId, channelId] = args.shift()!.split('/');
    if (!guildId || !channelId) {
        message.reply('Invalid voice channel');
        return;
    }

    let url = args.shift()
    let options = {}

    await client.joinVoice(guildId, channelId);
    streamStatus.joined = true;
    streamStatus.playing = false;
    //streamStatus.starttime = startTime ? startTime : "00:00:00";
    streamStatus.channelInfo = {
        guildId: guildId,
        channelId: channelId,
        cmdChannelId: message.channel.id
    }
    const streamUdpConn = await client.createStream();
    playStream(url, streamUdpConn, options);
    message.reply('Playing url');
    client.user?.setActivity(status_watch('livestream') as ActivityOptions);
}

async function playStream(video: string, udpConn: VoiceUdp, options: any) {
    console.log("Started playing video");

    udpConn.voiceConnection.setSpeaking(true);
    udpConn.voiceConnection.setVideoStatus(true);

    try {
        console.log("Trying to stream url");
        const res = await streamLivestreamVideo(video, udpConn);
        console.log("Finished streaming url");
    } catch (e) {
        console.log(e);
    } finally {
        udpConn.voiceConnection.setSpeaking(false);
        udpConn.voiceConnection.setVideoStatus(false);
    }

    command?.kill("SIGINT");
    client.leaveVoice();
    client.user?.setActivity(status_idle() as ActivityOptions)
    streamStatus.joined = false;
    streamStatus.joinsucc = false;
    streamStatus.playing = false;
    streamStatus.channelInfo = {
        guildId: '',
        channelId: '',
        cmdChannelId: ''
    }

}

// run server if enabled in config
if (config.server.enabled) {
    // run server.js
    require('./server');
}

    I've tried running the code with the nocache package, setting up a cron job to clear the cache every 5 minutes, and unifying functions in the code, but nothing works.
I think the problem has to do with some process that never really ends after a video finishes playing, probably ffmpeg. I don't know whether the problem is in my code, my VPS, or the library.
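
    One direction I still have to test is making sure that, after every single video, the ffmpeg child is killed and the 'progress' listener is removed, so that nothing accumulates across iterations of the 24/7 loop. Below is a hypothetical sketch of such a wrapper, not a confirmed fix; it reuses the names from the code above and assumes command behaves like a Node EventEmitter (its .on() usage suggests it does):

// Hypothetical cleanup wrapper: every playback ends with the listener detached
// and the ffmpeg child killed, whether the video finished normally or threw.
async function playOnceWithCleanup(moviePath: string, udpConn: VoiceUdp): Promise<void> {
    const onProgress = (msg: any) => { streamStatus.timemark = msg.timemark; };
    try {
        const playback = streamLivestreamVideo(moviePath, udpConn);
        command?.on('progress', onProgress);
        await playback;
    } finally {
        command?.off('progress', onProgress); // don't let 'progress' listeners pile up
        command?.kill('SIGINT');              // make sure the ffmpeg child actually exits
    }
}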

    


    I want the bot to stay in the voice channel streaming my videos 24/7 with no interruptions, and I don't know how to prevent it from getting laggy after a while.

    


    This is the config.json file, just in case you want to test the code and can't find it:

    


    {
    "token": "DCTOKEN",
    "videoChannels": ["ID", "OTHERID"],
    "commandChannels": ["ID", "OTHERID"],
    "adminIds": ["ID"],
    "movieFolder": "./movies/",
    "previewCache": "/tmp/preview-cache",
    "streamOpts": {
        "width": 1280,
        "height": 720,
        "fps": 30,
        "bitrateKbps": 3000,
        "hardware_acc": true
    },
    "server": {
        "enabled": false,
        "port": 8080
    }
}

  • Problems outputting the stream in RTMP format

    27 November 2023, by dongrixinyu

    I am using FFmpeg's C API to push a video stream (rtmp://....) to an SRS server.
    The input is an MP4 file named juren-30s.mp4.
    In the working version of the code, the output is also an MP4 file, named juren-30s-5.mp4.

    


    My piece of code (see further down) works fine when used for the following pipeline:
    mp4 -> demux -> decode -> rgb images -> encode -> mux -> mp4.

    


    Problem:

    


    When I changed the output to an online RTMP URL, rtmp://ip:port/live/stream_nb_23 (just an example; adapt it to your own server and rules):

    


    Result: with this code, the mp4 -> rtmp (flv) output is corrupted.

    


    What I've tried:

    


    Changing the output format
    I changed the output format parameter to flv when calling avformat_alloc_output_context2, but this didn't help.
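
    For reference, the change was along these lines, reusing the variable names from the full program further down. This is only a sketch of the usual FLV/RTMP output pattern, not code I have verified to fix the corruption; the global-header flag in particular is something commonly set when muxing to FLV, not something I know for sure is missing here:

// Sketch: open an FLV muxer on the RTMP URL instead of an MP4 file.
const char *out_url = "rtmp://ip:port/live/stream_nb_23";
AVFormatContext *fmt_ctx_out = NULL;
err = avformat_alloc_output_context2(&fmt_ctx_out, NULL, "flv", out_url);
if (err < 0 || !fmt_ctx_out) {
    printf("could not allocate flv output context %d \n", err);
    return err;
}

/* Later, once enc_ctx exists and before avcodec_open2(): the usual pattern is to
 * emit global headers (extradata) when the muxer asks for them, which FLV does. */
if (fmt_ctx_out->oformat->flags & AVFMT_GLOBALHEADER)
    enc_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

/* The RTMP URL is then opened the same way as a file, before avformat_write_header(). */
if ((err = avio_open2(&fmt_ctx_out->pb, out_url, AVIO_FLAG_WRITE,
                      &fmt_ctx_out->interrupt_callback, NULL)) < 0) {
    printf("avio_open2 fail %d \n", err);
    return err;
}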

    


    Debugging the output
    When I ran ffprobe rtmp://ip:port/live/xxxxxxx, I got the following errors and could not work out why:

    


    [h264 @ 0x55a925e3ba80] luma_log2_weight_denom 12 is out of range
[h264 @ 0x55a925e3ba80] Missing reference picture, default is 2
[h264 @ 0x55a925e3ba80] concealing 8003 DC, 8003 AC, 8003 MV errors in P frame
[h264 @ 0x55a925e3ba80] QP 4294966938 out of range
[h264 @ 0x55a925e3ba80] decode_slice_header error
[h264 @ 0x55a925e3ba80] no frame!
[h264 @ 0x55a925e3ba80] luma_log2_weight_denom 21 is out of range
[h264 @ 0x55a925e3ba80] luma_log2_weight_denom 10 is out of range
[h264 @ 0x55a925e3ba80] chroma_log2_weight_denom 12 is out of range
[h264 @ 0x55a925e3ba80] Missing reference picture, default is 0
[h264 @ 0x55a925e3ba80] decode_slice_header error
[h264 @ 0x55a925e3ba80] QP 4294967066 out of range
[h264 @ 0x55a925e3ba80] decode_slice_header error
[h264 @ 0x55a925e3ba80] no frame!
[h264 @ 0x55a925e3ba80] QP 341 out of range
[h264 @ 0x55a925e3ba80] decode_slice_header error

    I am confused about how MP4 and RTMP differ in terms of how the FFmpeg C API must be used to produce a correct output stream.

    


    Besides, I would also like to learn how to convert video and audio streams into other formats with the FFmpeg C API, such as flv, ts, rtsp, etc.

    


    Code to reproduce the problem:

    So how do I make this code output to RTMP without ending up with an unplayable video?

    


#include <stdio.h>
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
int main()
{
    int ret = 0; int err;

    //Open input file
    char filename[] = "juren-30s.mp4";
    AVFormatContext *fmt_ctx = avformat_alloc_context();
    if (!fmt_ctx) {
        printf("error code %d \n",AVERROR(ENOMEM));
        return ENOMEM;
    }
    if((err = avformat_open_input(&fmt_ctx, filename,NULL,NULL)) < 0){
        printf("can not open file %d \n",err);
        return err;
    }

    //Open the decoder
    AVCodecContext *avctx = avcodec_alloc_context3(NULL);
    ret = avcodec_parameters_to_context(avctx, fmt_ctx->streams[0]->codecpar);
    if (ret < 0){
        printf("error code %d \n",ret);
        return ret;
    }
    AVCodec *codec = avcodec_find_decoder(avctx->codec_id);
    if ((ret = avcodec_open2(avctx, codec, NULL)) < 0) {
        printf("open codec faile %d \n",ret);
        return ret;
    }

    //Open the output file container
    char filename_out[] = "juren-30s-5.mp4";
    AVFormatContext *fmt_ctx_out = NULL;
    err = avformat_alloc_output_context2(&fmt_ctx_out, NULL, NULL, filename_out);
    if (!fmt_ctx_out) {
        printf("error code %d \n",AVERROR(ENOMEM));
        return ENOMEM;
    }
    //Add a new stream to the output container context
    AVStream *st = avformat_new_stream(fmt_ctx_out, NULL);
    st->time_base = fmt_ctx->streams[0]->time_base;

    AVCodecContext *enc_ctx = NULL;
    
    AVPacket *pkt = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();
    AVPacket *pkt_out = av_packet_alloc();

    int frame_num = 0; int read_end = 0;
    
    for(;;){
        if( 1 == read_end ){ break;}

        ret = av_read_frame(fmt_ctx, pkt);
        //Skip and do not process audio packets
        if( 1 == pkt->stream_index ){
            av_packet_unref(pkt);
            continue;
        }

        if ( AVERROR_EOF == ret) {
            //After reading the file, the data and size of pkt should be null at this time
            avcodec_send_packet(avctx, NULL);
        }else {
            if( 0 != ret){
                printf("read error code %d \n",ret);
                return ENOMEM;
            }else{
                retry:
                if (avcodec_send_packet(avctx, pkt) == AVERROR(EAGAIN)) {
                    printf("Receive_frame and send_packet both returned EAGAIN, which is an API violation.\n");
                    //Here you can consider sleeping for 0.1 seconds and returning EAGAIN. This is usually because there is a bug in ffmpeg's internal API.
                    goto retry;
                }
                //Release the encoded data in pkt
                av_packet_unref(pkt);
            }

        }

        //The loop keeps reading data from the decoder until there is no more data to read.
        for(;;){
            //Read AVFrame
            ret = avcodec_receive_frame(avctx, frame);
            /* Release the YUV data in the frame,
             * Since av_frame_unref is called in the avcodec_receive_frame function, the following code can be commented.
             * So we don't need to manually unref this AVFrame
             * */
            //av_frame_unref(frame);

            if( AVERROR(EAGAIN) == ret ){
                //Prompt EAGAIN means the decoder needs more AVPackets
                //Jump out of the first layer of for and let the decoder get more AVPackets
                break;
            }else if( AVERROR_EOF == ret ){
                /* The prompt AVERROR_EOF means that an AVPacket with both data and size NULL has been sent to the decoder before.
                 * Sending NULL AVPacket prompts the decoder to flush out all cached frames.
                 * Usually a NULL AVPacket is sent only after reading the input file, or when another video stream needs to be decoded with an existing decoder.
                 *
                 * */

                /* Send null AVFrame to the encoder and let the encoder flush out the remaining data.
                 * */
                ret = avcodec_send_frame(enc_ctx, NULL);
                for(;;){
                    ret = avcodec_receive_packet(enc_ctx, pkt_out);
                    //EAGAIN should not happen here; if it does, exit directly.
                    if (ret == AVERROR(EAGAIN)){
                        printf("avcodec_receive_packet error code %d \n",ret);
                        return ret;
                    }
                    
                    if ( AVERROR_EOF == ret ){ break; }
                    
                    //Encode the AVPacket, print some information first, and then write it to the file.
                    printf("pkt_out size : %d \n",pkt_out->size);
                    //Set the stream_index of AVPacket so that you know which stream it is.
                    pkt_out->stream_index = st->index;
                    //Convert the time base of AVPacket to the time base of the output stream.
                    pkt_out->pts = av_rescale_q_rnd(pkt_out->pts, fmt_ctx->streams[0]->time_base, st->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
                    pkt_out->dts = av_rescale_q_rnd(pkt_out->dts, fmt_ctx->streams[0]->time_base, st->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
                    pkt_out->duration = av_rescale_q_rnd(pkt_out->duration, fmt_ctx->streams[0]->time_base, st->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);


                    ret = av_interleaved_write_frame(fmt_ctx_out, pkt_out);
                    if (ret < 0) {
                        printf("av_interleaved_write_frame faile %d \n",ret);
                        return ret;
                    }
                    av_packet_unref(pkt_out);
                }
                av_write_trailer(fmt_ctx_out);
                //Jump out of the second layer of for, the file has been decoded.
                read_end = 1;
                break;
            }else if( ret >= 0 ){
                //Only when a frame is decoded can the encoder be initialized.
                if( NULL == enc_ctx ){
                    //Open the encoder and set encoding information.
                    AVCodec *encode = avcodec_find_encoder(AV_CODEC_ID_H264);
                    enc_ctx = avcodec_alloc_context3(encode);
                    enc_ctx->codec_type = AVMEDIA_TYPE_VIDEO;
                    enc_ctx->bit_rate = 400000;
                    enc_ctx->framerate = avctx->framerate;
                    enc_ctx->gop_size = 30;
                    enc_ctx->max_b_frames = 10;
                    enc_ctx->profile = FF_PROFILE_H264_MAIN;
                   
                    /*
                     * Most of the following information is also available in the container, so you could
                     * open the encoder directly from the container parameters at the start.
                     * I take these encoder parameters from the AVFrame instead, because the decoded AVFrame
                     * may pass through a filter and be transformed, so it reflects what is actually encoded
                     * (this example does not use filters, though).
                     */
                     
                    //The time base of the encoder should be the time base of AVFrame, because AVFrame is the input. The time base of AVFrame is the time base of the stream.
                    enc_ctx->time_base = fmt_ctx->streams[0]->time_base;
                    enc_ctx->width = fmt_ctx->streams[0]->codecpar->width;
                    enc_ctx->height = fmt_ctx->streams[0]->codecpar->height;
                    enc_ctx->sample_aspect_ratio = st->sample_aspect_ratio = frame->sample_aspect_ratio;
                    enc_ctx->pix_fmt = frame->format;
                    enc_ctx->color_range            = frame->color_range;
                    enc_ctx->color_primaries        = frame->color_primaries;
                    enc_ctx->color_trc              = frame->color_trc;
                    enc_ctx->colorspace             = frame->colorspace;
                    enc_ctx->chroma_sample_location = frame->chroma_location;

                    /* Note that the value of this field_order is different for different videos. I have written it here.
                     * Because the video in this article is AV_FIELD_PROGRESSIVE
                     * The production environment needs to process different videos
                     */
                    enc_ctx->field_order = AV_FIELD_PROGRESSIVE;

                    /* Now we need to copy the encoder parameters to the stream. When decoding, assign parameters from the stream to the decoder.
                     * Now let’s do it in reverse.
                     * */
                    ret = avcodec_parameters_from_context(st->codecpar,enc_ctx);
                    if (ret < 0){
                        printf("error code %d \n",ret);
                        return ret;
                    }
                    if ((ret = avcodec_open2(enc_ctx, encode, NULL)) < 0) {
                        printf("open codec faile %d \n",ret);
                        return ret;
                    }

                    //Formally open the output file
                    if ((ret = avio_open2(&fmt_ctx_out->pb, filename_out, AVIO_FLAG_WRITE,&fmt_ctx_out->interrupt_callback,NULL)) < 0) {
                        printf("avio_open2 fail %d \n",ret);
                        return ret;
                    }

                    //Write the file header first.
                    ret = avformat_write_header(fmt_ctx_out,NULL);
                    if (ret < 0) {
                        printf("avformat_write_header fail %d \n",ret);
                        return ret;
                    }

                }

                //Send AVFrame to the encoder, and then continuously read AVPacket
                ret = avcodec_send_frame(enc_ctx, frame);
                if (ret < 0) {
                    printf("avcodec_send_frame fail %d \n",ret);
                    return ret;
                }
                for(;;){
                    ret = avcodec_receive_packet(enc_ctx, pkt_out);
                    if (ret == AVERROR(EAGAIN)){ break; }
                    
                    if (ret < 0){
                    printf("avcodec_receive_packet fail %d \n",ret);
                    return ret;
                    }
                    
                    //Encode the AVPacket, print some information first, and then write it to the file.
                    printf("pkt_out size : %d \n",pkt_out->size);

                    //Set the stream_index of AVPacket so that you know which stream it is.
                    pkt_out->stream_index = st->index;
                    
                    //Convert the time base of AVPacket to the time base of the output stream.
                    pkt_out->pts = av_rescale_q_rnd(pkt_out->pts, fmt_ctx->streams[0]->time_base, st->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
                    pkt_out->dts = av_rescale_q_rnd(pkt_out->dts, fmt_ctx->streams[0]->time_base, st->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
                    pkt_out->duration = av_rescale_q_rnd(pkt_out->duration, fmt_ctx->streams[0]->time_base, st->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);

                    ret = av_interleaved_write_frame(fmt_ctx_out, pkt_out);
                    if (ret < 0) {
                        printf("av_interleaved_write_frame faile %d \n",ret);
                        return ret;
                    }
                    av_packet_unref(pkt_out);
                }

            }
            else{ printf("other fail \n"); return ret;}
        }
    }
    
    av_frame_free(&frame); av_packet_free(&pt); av_packet_free(&pkt_out);
    
    //Close the encoder and decoder.
    avcodec_close(avctx); avcodec_close(enc_ctx);

    //Release container memory.
    avformat_free_context(fmt_ctx);

    //Must call avio_closep here, otherwise buffered data may not be flushed and the file can end up 0 KB
    avio_closep(&fmt_ctx_out->pb);
    avformat_free_context(fmt_ctx_out);
    printf("done \n");

    return 0;
}