
Other articles (50)

  • Use, discuss, criticize

    13 April 2011

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • (De)activating features (plugins)

    18 February 2011

    To manage adding and removing extra features (plugins), MediaSPIP relies on SVP as of version 0.2.
    SVP makes it easy to activate plugins from the MediaSPIP configuration area.
    To get there, open the configuration area and go to the "Gestion des plugins" page.
    By default, MediaSPIP ships with the full set of so-called "compatible" plugins; they have been tested and integrated to work smoothly with each (...)

  • Managing creation and editing rights for objects

    8 February 2011

    By default, many features are restricted to administrators, but each can be configured independently to change the minimum status required to use it, notably: writing content on the site, adjustable in the form template management; adding notes to articles; adding captions and annotations to images;

On other sites (9113)

  • Seeking with ffmpeg options fails or causes delayed playback in Discord bot

    29 August 2022, by J Petersen

    My Discord bot allows users to play a song starting from a timestamp.

    The problem is that playback is delayed, and the audio plays too fast and is jumbled when start times >= 30 s are set.

    Results from testing different start times (same URL, 30-second duration):

    Entered Start Time (s)   Playback Delay (s)   Song Playback Time (s)
                         0                    3                       30
                        30                   10                       22
                        60                   17                       17
                       120                   31                        2
                       150                  120                       <1

    I am setting the start time using ffmpeg_options as suggested in this question.

    Does anyone understand why the audio playback is delayed and jumbled? How can I reduce the playback delay and let users start in the middle of a multi-chapter YouTube video?

    Code:

    import discord
    import youtube_dl
    import asyncio


    # Suppress noise about console usage from errors
    youtube_dl.utils.bug_reports_message = lambda: ""


    ytdl_format_options = {
        "format": "bestaudio/best",
        "outtmpl": "%(extractor)s-%(id)s-%(title)s.%(ext)s",
        "restrictfilenames": True,
        "noplaylist": False,
        "yesplaylist": True,
        "nocheckcertificate": True,
        "ignoreerrors": False,
        "logtostderr": False,
        "quiet": True,
        "no_warnings": True,
        "default_search": "auto",
        "source_address": "0.0.0.0",  # Bind to ipv4 since ipv6 addresses cause issues at certain times
    }

    ytdl = youtube_dl.YoutubeDL(ytdl_format_options)


    class YTDLSource(discord.PCMVolumeTransformer):
        def __init__(self, source: discord.AudioSource, *, data: dict, volume: float = 0.5):
            super().__init__(source, volume)

            self.data = data

            self.title = data.get("title")
            self.url = data.get("url")

        @classmethod
        async def from_url(cls, url, *, loop=None, stream=False, timestamp=0):
            ffmpeg_options = {
                "options": f"-vn -ss {timestamp}"}

            loop = loop or asyncio.get_event_loop()

            data = await loop.run_in_executor(None, lambda: ytdl.extract_info(url, download=not stream))
            if "entries" in data:
                # Takes the first item from a playlist
                data = data["entries"][0]

            filename = data["url"] if stream else ytdl.prepare_filename(data)
            return cls(discord.FFmpegPCMAudio(filename, **ffmpeg_options), data=data)


    intents = discord.Intents.default()

    bot = discord.Bot(intents=intents)

    @bot.slash_command()
    async def play(ctx, audio: discord.Option(), seconds: discord.Option(), timestamp: discord.Option()):
        channel = ctx.author.voice.channel
        voice = await channel.connect()
        player = await YTDLSource.from_url(audio, loop=bot.loop, stream=True, timestamp=int(timestamp))
        voice.play(player)
        await asyncio.sleep(int(seconds))
        await voice.disconnect()

    token = "TOKEN"  # placeholder; the actual bot token was redacted in the original post
    bot.run(token)
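
    A likely explanation, offered as a hedged note: discord.py's FFmpegPCMAudio appends the "options" string after the input, so -ss here acts as an output option and ffmpeg decodes the source from the very beginning, discarding audio up to the requested start; that matches the delay growing with the entered start time in the table above. Passing the seek through "before_options" instead makes it an input option, so ffmpeg seeks before decoding. A minimal sketch of from_url with that change (the reconnect flags are an optional extra commonly used for streamed HTTP sources and are an assumption, not part of the original code):

        @classmethod
        async def from_url(cls, url, *, loop=None, stream=False, timestamp=0):
            # Seek in the input (-ss before -i) instead of decoding and
            # discarding everything up to the start time. The reconnect
            # flags are an assumption: a common addition for HTTP streams.
            ffmpeg_options = {
                "before_options": f"-ss {timestamp} -reconnect 1 -reconnect_streamed 1 -reconnect_delay_max 5",
                "options": "-vn",
            }

            loop = loop or asyncio.get_event_loop()
            data = await loop.run_in_executor(None, lambda: ytdl.extract_info(url, download=not stream))
            if "entries" in data:
                # Take the first item from a playlist
                data = data["entries"][0]

            filename = data["url"] if stream else ytdl.prepare_filename(data)
            return cls(discord.FFmpegPCMAudio(filename, **ffmpeg_options), data=data)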

  • How can I improve the uptime of my coffee pot live stream?

    26 April 2017, by tww0003

    Some Background on the Project:

    Like most software developers, I depend on coffee to keep me running, and so do my coworkers. I had an old iPhone sitting around, so I decided to pay homage to the first webcam and live stream my office coffee pot.

    The stream has become popular within my company, so I want to make sure it stays online with as little effort on my part as possible. Right now it occasionally goes down, and I have to get it up and running again manually.

    My Setup:

    I have nginx set up on a DigitalOcean server (my nginx.conf is shown below), and I downloaded an rtmp streaming app for my iPhone.

    The phone streams to example.com/live/stream; I then use an ffmpeg command to take that stream, strip the audio (the live stream is public, and I don’t want coworkers to feel like they have to be careful about what they say), and make it accessible at rtmp://example.com/live/coffee and example.com/hls/coffee.m3u8.

    Since I’m not too familiar with ffmpeg, I googled around for an appropriate command to strip the audio from the coffee stream, and I found this:

    ffmpeg -i rtmp://localhost/live/stream -vcodec libx264 -vprofile baseline -acodec aac -strict -2 -f flv -an rtmp://localhost/live/coffee

    Essentially all I know about this command is that the input stream comes from localhost/live/stream, that it strips the audio with -an, and that it outputs to rtmp://localhost/live/coffee.

    I would assume that ffmpeg -i rtmp://localhost/live/stream -an rtmp://localhost/live/coffee would have the same effect, but the page I found the command on was dealing with ffmpeg and nginx, so I figured the extra parameters were useful.
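
    A hedged aside: because -an drops the audio entirely, the -acodec aac -strict -2 options have nothing left to act on, and re-encoding the video with libx264 is the expensive, failure-prone part of the command. If the phone app already publishes H.264 (an assumption, not stated in the post), copying the video stream avoids re-encoding altogether:

    ffmpeg -i rtmp://localhost/live/stream -c:v copy -an -f flv rtmp://localhost/live/coffee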

    What I’ve noticed with my original command is that it errors out, taking the live stream down. I wrote a small bash script to rerun the command when it stops, but I don’t think this is the best solution.

    Here is the bash script:

    while true;
    do
           ffmpeg -i rtmp://localhost/live/stream -vcodec libx264 -vprofile baseline -acodec aac -strict -2 -f flv -an rtmp://localhost/live/coffee
           echo 'Something went wrong. Retrying...'
           sleep 1
    done
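
    A hedged alternative to the hand-rolled retry loop: run the command under a process supervisor such as systemd, which takes care of restarts, backoff, and logging. A minimal unit sketch; the unit name, description, and ffmpeg path are placeholders, not from the original post:

    [Unit]
    Description=Audio-stripped relay for the coffee stream (hypothetical unit)
    After=network.target nginx.service

    [Service]
    ExecStart=/usr/bin/ffmpeg -i rtmp://localhost/live/stream -c:v copy -an -f flv rtmp://localhost/live/coffee
    # Restart ffmpeg whenever it exits, pausing briefly between attempts.
    Restart=always
    RestartSec=2

    [Install]
    WantedBy=multi-user.target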

    I’m curious about two things:

    1. What is the best way to strip audio from an rtmp stream?
    2. What is the proper nginx configuration to keep my rtmp stream up for as long as possible?

    Since I have close to zero experience with nginx, ffmpeg, and rtmp streaming, any help or tips would be appreciated.

    Here is my nginx.conf file:

    worker_processes  1;

    events {
       worker_connections  1024;
    }


    http {
       include       mime.types;
       default_type  application/octet-stream;

       sendfile        on;

       keepalive_timeout  65;

       server {
           listen       80;
           server_name  localhost;

           location / {
               root   html;
               index  index.html index.htm;
           }

           error_page   500 502 503 504  /50x.html;
           location = /50x.html {
               root   html;
           }

           location /stat {
                   rtmp_stat all;
                   rtmp_stat_stylesheet stat.xsl;
                   allow 127.0.0.1;
           }
           location /stat.xsl {
                   root html;
           }
           location /hls {
                   root /tmp;
                   add_header Cache-Control no-cache;
           }
           location /dash {
                   root /tmp;
                   add_header Cache-Control no-cache;
                   add_header Access-Control-Allow-Origin *;
           }
       }
    }

    rtmp {

       server {

           listen 1935;
           chunk_size 4000;

           application live {
               live on;

               hls on;
               hls_path /tmp/hls;

               dash on;
               dash_path /tmp/dash;
           }
       }
    }
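
    One hedged nginx-side note: with the stock nginx-rtmp-module, a publisher that disconnects uncleanly can leave a stale session behind, which blocks the phone from re-publishing under the same stream name. The module’s drop_idle_publisher directive ages out such sessions; a sketch of the live application with it added (the 10s timeout is an arbitrary choice, not from the original config):

    application live {
        live on;
        # Drop a publisher that has sent no data for 10 seconds so the
        # phone can reconnect to the same stream name after a hiccup.
        drop_idle_publisher 10s;

        hls on;
        hls_path /tmp/hls;

        dash on;
        dash_path /tmp/dash;
    }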

    Edit: I’m also running into this same issue: https://trac.ffmpeg.org/ticket/4401

  • How to read a UDP stream and forward it as SRT?

    27 December 2020, by andrea-f

    Over the holidays I started a small hobby project to learn how SRT works. I set up a simple Android app with NodeMediaClient (https://github.com/NodeMedia/NodeMediaClient-Android/tree/master/nodemediaclient/src) that publishes a UDP stream, which I read with:

    private final Object txFrameLock = new Object();
    [...]

    t = new Thread(new Runnable() {
        public void run() {

            byte[] message = new byte[MAX_UDP_DATAGRAM_LEN];

            try {
                socket = new DatagramSocket(UDP_SERVER_PORT);
                while (!Thread.currentThread().isInterrupted()) {
                    while (!socket.isClosed()) {
                        DatagramPacket packet = new DatagramPacket(message, message.length);
                        socket.receive(packet);

                        ByteBuffer tsUDPPack = ByteBuffer.wrap(packet.getData());
                        int ret = parseTSPack(tsUDPPack);

                        Log.i("srtstreaming SRT packet sent", String.format("%d", ret));
                    }
                    synchronized (txFrameLock) {
                        try {
                            txFrameLock.wait(10);
                            //Thread.sleep(500);

                        } catch (InterruptedException ie) {
                            t.interrupt();
                        }
                    }
                }

            } catch (SocketException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
                if (socket != null) {
                    socket.close();
                }
            }
        }
    });
    t.start();

    parseTSPack looks like:

    private int parseTSPack(ByteBuffer tsUDPPack)
    {
        byte[] ts_pack = new byte[TS_PACK_LEN];
        if (tsUDPPack.remaining() != TS_UDP_PACK_LEN) {
            Log.i(TAG, "srtestreaming ts udp len is not 1316.");
            return 0;
        }
        int count = 0;
        while (tsUDPPack.remaining() > 0) {
            tsUDPPack.get(ts_pack);
            int ret = mSrt.send(ts_pack);
            count++;
            Log.i("srtstreaming ts packets ", String.format("%d", count));
        }
        return count;
    }

    Then the JNI implementation takes care of opening the stream (srt://192.168.1.238:4200?streamid=test/live/1234) and handling the mSrt.send(ts_pack) calls to forward the packets to the SRT caller.
    On the receiving side I am using: ffmpeg -v 9 -loglevel 99 -report -re -i 'srt://192.168.1.238:4200?streamid=test/live/1234&mode=listener' -c copy -copyts -f mpegts ./srt_recording_ffmpeg.ts, but the video arrives broken and no frames can be decoded.
    In the FFmpeg output I am getting something along these lines:

    [mpegts @ 0x7f9a0f008200] Probe: 176, score: 1, dvhs_score: 1, fec_score: 1
    [mpegts @ 0x7f9a0f008200] Probe: 364, score: 2, dvhs_score: 1, fec_score: 1
    [mpegts @ 0x7f9a0f008200] Probe: 552, score: 3, dvhs_score: 1, fec_score: 1
    [mpegts @ 0x7f9a0f008200] Probe: 740, score: 3, dvhs_score: 1, fec_score: 1
    [mpegts @ 0x7f9a0f008200] Packet corrupt (stream = 1, dts = 980090).
    [mpegts @ 0x7f9a0f008200] rfps: 7.583333 0.011807
    [mpegts @ 0x7f9a0f008200] rfps: 8.250000 0.014541
    [...]
    [mpegts @ 0x7f9a10809000] Non-monotonous DTS in output stream 0:1; previous: 1193274, current: 1189094; changing to 1193275. This may result in incorrect timestamps in the output file.
    [mpegts @ 0x7f9a10809000] Non-monotonous DTS in output stream 0:1; previous: 1199543, current: 1199543; changing to 1199544. This may result in incorrect timestamps in the output file.
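
    A hedged observation: ByteBuffer.wrap(packet.getData()) wraps the whole reusable receive buffer rather than just the bytes that actually arrived. If MAX_UDP_DATAGRAM_LEN is larger than 1316, the length check in parseTSPack always fails; if it equals 1316, any shorter datagram leaves stale bytes from the previous packet in the buffer, and those get forwarded through mSrt.send(). Wrapping only the received range, ByteBuffer.wrap(packet.getData(), packet.getOffset(), packet.getLength()), would be the first thing to rule out given the Packet corrupt and Non-monotonous DTS messages above.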

    Any ideas on how to correctly forward the UDP packets to the SRT library?
