
On other sites (9113)
-
Seeking with ffmpeg options fails or causes delayed playback in Discord bot
29 August 2022, by J Petersen
My Discord bot allows users to play a song starting from a timestamp.


The problem is that playback is delayed, and the audio plays fast and jumbled, whenever a start time of 30 s or more is set.


Results from testing different start times (same URL, 30-second duration):







Entered Start Time (s) | Playback Delay (s) | Song Playback Time (s)
---------------------- | ------------------ | ----------------------
0                      | 3                  | 30
30                     | 10                 | 22
60                     | 17                 | 17
120                    | 31                 | 2
150                    | 120                | <1









I am setting the start time using ffmpeg_options as suggested in this question.


Does anyone understand why the audio playback is delayed and jumbled? How can I reduce the playback delay and allow users to start in the middle of a multi-chapter YouTube video?


Code:


import asyncio

import discord
import youtube_dl

# Suppress noise about console usage from errors
youtube_dl.utils.bug_reports_message = lambda: ""

ytdl_format_options = {
    "format": "bestaudio/best",
    "outtmpl": "%(extractor)s-%(id)s-%(title)s.%(ext)s",
    "restrictfilenames": True,
    "noplaylist": False,
    "nocheckcertificate": True,
    "ignoreerrors": False,
    "logtostderr": False,
    "quiet": True,
    "no_warnings": True,
    "default_search": "auto",
    "source_address": "0.0.0.0",  # Bind to IPv4 since IPv6 addresses cause issues at certain times
}

ytdl = youtube_dl.YoutubeDL(ytdl_format_options)


class YTDLSource(discord.PCMVolumeTransformer):
    def __init__(self, source: discord.AudioSource, *, data: dict, volume: float = 0.5):
        super().__init__(source, volume)

        self.data = data

        self.title = data.get("title")
        self.url = data.get("url")

    @classmethod
    async def from_url(cls, url, *, loop=None, stream=False, timestamp=0):
        ffmpeg_options = {"options": f"-vn -ss {timestamp}"}

        loop = loop or asyncio.get_event_loop()

        data = await loop.run_in_executor(None, lambda: ytdl.extract_info(url, download=not stream))
        if "entries" in data:
            # Takes the first item from a playlist
            data = data["entries"][0]

        filename = data["url"] if stream else ytdl.prepare_filename(data)
        return cls(discord.FFmpegPCMAudio(filename, **ffmpeg_options), data=data)


intents = discord.Intents.default()

bot = discord.Bot(intents=intents)


@bot.slash_command()
async def play(ctx, audio: discord.Option(), seconds: discord.Option(), timestamp: discord.Option()):
    channel = ctx.author.voice.channel
    voice = await channel.connect()
    player = await YTDLSource.from_url(audio, loop=bot.loop, stream=True, timestamp=int(timestamp))
    voice.play(player)
    await asyncio.sleep(int(seconds))
    await voice.disconnect()


token = "token value"  # placeholder; the real token is redacted
bot.run(token)
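For reference, the delay pattern in the table above (delay growing roughly with the start time) is what output-side seeking produces: with `-ss` in `options` it is applied after `-i`, so FFmpeg decodes and discards everything before the timestamp at network-read speed. A minimal sketch of the usual remedy, assuming the `before_options`/`options` keyword names accepted by discord.py's `FFmpegPCMAudio`: move `-ss` into `before_options`, so it becomes an input option and FFmpeg seeks in the container instead.

```python
# Sketch: build FFmpeg option strings with input-side seeking for
# FFmpegPCMAudio. "before_options" are placed before -i, so -ss becomes
# an input option and FFmpeg seeks to the timestamp rather than decoding
# and discarding everything before it.
def make_ffmpeg_options(timestamp: int) -> dict:
    return {
        "before_options": f"-ss {timestamp}",
        "options": "-vn",  # audio only; skip video decoding
    }

# e.g. discord.FFmpegPCMAudio(filename, **make_ffmpeg_options(120))
```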



-
How can I improve the up-time of my coffee pot live stream ?
26 April 2017, by tww0003
Some background on the project:
Like most software developers, I depend on coffee to keep me running, and so do my coworkers. I had an old iPhone sitting around, so I decided to pay homage to the first webcam and live stream my office coffee pot.
The stream has become popular within my company, so I want to make sure it stays online with as little effort on my part as possible. As of right now, it occasionally goes down and I have to bring it back up manually.
My setup:
I have nginx set up on a DigitalOcean server (my nginx.conf is shown below) and downloaded an RTMP streaming app for my iPhone.
The phone streams to example.com/live/stream, and I then use an ffmpeg command to take that stream, strip the audio (the live stream is public, and I don't want coworkers to feel they have to be careful about what they say), and make it accessible at rtmp://example.com/live/coffee and example.com/hls/coffee.m3u8.
Since I'm not too familiar with ffmpeg, I googled around for an appropriate command to strip the coffee stream of its audio and found this:
ffmpeg -i rtmp://localhost/live/stream -vcodec libx264 -vprofile baseline -acodec aac -strict -2 -f flv -an rtmp://localhost/live/coffee
Essentially all I know about this command is that the input stream comes from localhost/live/stream, it strips the audio with -an, and it outputs to rtmp://localhost/live/coffee. I would assume that
ffmpeg -i rtmp://localhost/live/stream -an rtmp://localhost/live/coffee
would have the same effect, but the page I found the command on was dealing with ffmpeg and nginx, so I figured the extra parameters were useful.
What I've noticed is that the command will error out, taking the live stream down. I wrote a small bash script to rerun the command whenever it stops, but I don't think this is the best solution.
Here is the bash script:
while true; do
    ffmpeg -i rtmp://localhost/live/stream -vcodec libx264 -vprofile baseline -acodec aac -strict -2 -f flv -an rtmp://localhost/live/coffee
    echo 'Something went wrong. Retrying...'
    sleep 1
done

I'm curious about two things:
- What is the best way to strip audio from an RTMP stream?
- What is the proper nginx configuration to keep my RTMP stream up for as long as possible?
Since I have close to zero experience with nginx, ffmpeg, and RTMP streaming, any help or tips would be appreciated.
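On the first of the two questions above, one sketch worth considering (an assumption on my part, not something the post verifies against this server): since the only change to the stream is dropping the audio, the video can be stream-copied with -c:v copy instead of re-encoded with libx264. That removes the encoder's CPU load, a common reason a long-running ffmpeg relay falls behind and errors out. A hypothetical command builder:

```python
# Sketch (assumption, not from the post): assemble the relay command with
# video stream copy instead of a libx264 re-encode; -an strips all audio.
def build_strip_audio_cmd(src: str, dst: str) -> list[str]:
    return [
        "ffmpeg",
        "-i", src,       # e.g. rtmp://localhost/live/stream
        "-c:v", "copy",  # pass the video through untouched (no re-encode)
        "-an",           # drop every audio stream
        "-f", "flv",     # RTMP outputs use the FLV muxer
        dst,             # e.g. rtmp://localhost/live/coffee
    ]
```

Run it with, say, subprocess.run(build_strip_audio_cmd(...)); because nothing is re-encoded, CPU usage should drop to near zero.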
Here is my nginx.conf file:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name localhost;

        location / {
            root html;
            index index.html index.htm;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }

        location /stat {
            rtmp_stat all;
            rtmp_stat_stylesheet stat.xsl;
            allow 127.0.0.1;
        }

        location /stat.xsl {
            root html;
        }

        location /hls {
            root /tmp;
            add_header Cache-Control no-cache;
        }

        location /dash {
            root /tmp;
            add_header Cache-Control no-cache;
            add_header Access-Control-Allow-Origin *;
        }
    }
}

rtmp {
    server {
        listen 1935;
        chunk_size 4000;

        application live {
            live on;
            hls on;
            hls_path /tmp/hls;
            dash on;
            dash_path /tmp/dash;
        }
    }
}

Edit:
I'm also running into this same issue: https://trac.ffmpeg.org/ticket/4401
-
How to read a UDP stream and forward it as SRT?
27 December 2020, by andrea-f
Over the holidays I started a small hobby project to learn how SRT works. I have a simple Android app set up with NodeMediaClient (https://github.com/NodeMedia/NodeMediaClient-Android/tree/master/nodemediaclient/src) which publishes a UDP stream, which I read with:


private final Object txFrameLock = new Object();
[...]

t = new Thread(new Runnable() {
    public void run() {

        byte[] message = new byte[MAX_UDP_DATAGRAM_LEN];

        try {
            socket = new DatagramSocket(UDP_SERVER_PORT);
            while (!Thread.currentThread().isInterrupted()) {
                while (!socket.isClosed()) {
                    DatagramPacket packet = new DatagramPacket(message, message.length);
                    socket.receive(packet);

                    ByteBuffer tsUDPPack = ByteBuffer.wrap(packet.getData());
                    int ret = parseTSPack(tsUDPPack);

                    Log.i("srtstreaming SRT packet sent", String.format("%d", ret));
                }
                synchronized (txFrameLock) {
                    try {
                        txFrameLock.wait(10);
                        //Thread.sleep(500);
                    } catch (InterruptedException ie) {
                        t.interrupt();
                    }
                }
            }
        } catch (SocketException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (socket != null) {
                socket.close();
            }
        }
    }
});
t.start();



parseTSPack looks like:


private int parseTSPack(ByteBuffer tsUDPPack) {
    byte[] ts_pack = new byte[TS_PACK_LEN];
    if (tsUDPPack.remaining() != TS_UDP_PACK_LEN) {
        Log.i(TAG, "srtestreaming ts udp len is not 1316.");
        return 0;
    }
    int count = 0;
    while (tsUDPPack.remaining() > 0) {
        tsUDPPack.get(ts_pack);
        int ret = mSrt.send(ts_pack);
        count++;
        Log.i("srtstreaming ts packets ", String.format("%d", count));
    }
    return count;
}



Then the JNI implementation takes care of opening the stream (srt://192.168.1.238:4200?streamid=test/live/1234) and of handling the mSrt.send(ts_pack) call to forward the packets to the SRT caller.
On the receiving side I am using:
ffmpeg -v 9 -loglevel 99 -report -re -i 'srt://192.168.1.238:4200?streamid=test/live/1234&mode=listener' -c copy -copyts -f mpegts ./srt_recording_ffmpeg.ts
The video arrives broken and no frames can be decoded. In the FFmpeg output I get something along the lines of:

[mpegts @ 0x7f9a0f008200] Probe: 176, score: 1, dvhs_score: 1, fec_score: 1 
[mpegts @ 0x7f9a0f008200] Probe: 364, score: 2, dvhs_score: 1, fec_score: 1 
[mpegts @ 0x7f9a0f008200] Probe: 552, score: 3, dvhs_score: 1, fec_score: 1 
[mpegts @ 0x7f9a0f008200] Probe: 740, score: 3, dvhs_score: 1, fec_score: 1 
[mpegts @ 0x7f9a0f008200] Packet corrupt (stream = 1, dts = 980090).
[mpegts @ 0x7f9a0f008200] rfps: 7.583333 0.011807
[mpegts @ 0x7f9a0f008200] rfps: 8.250000 0.014541
[...]
[mpegts @ 0x7f9a10809000] Non-monotonous DTS in output stream 0:1; previous: 1193274, current: 1189094; changing to 1193275. This may result in incorrect timestamps in the output file.
[mpegts @ 0x7f9a10809000] Non-monotonous DTS in output stream 0:1; previous: 1199543, current: 1199543; changing to 1199544. This may result in incorrect timestamps in the output file.



Any ideas on how to correctly forward the UDP packets to the SRT library?
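One detail worth checking in the receive loop above (an assumption about the failure mode, not something the question confirms): ByteBuffer.wrap(packet.getData()) wraps the entire receive buffer rather than the packet.getLength() bytes actually received, and each forwarded payload should consist of whole 188-byte MPEG-TS packets, each starting with the 0x47 sync byte. A sketch of that validation, written in Python for brevity with hypothetical names:

```python
TS_PACKET_LEN = 188        # fixed MPEG-TS packet size
TS_UDP_PACK_LEN = 7 * 188  # 1316 bytes: seven TS packets per datagram

def split_ts_payload(payload: bytes) -> list[bytes]:
    """Validate a received datagram and split it into 188-byte TS packets.

    `payload` must be the bytes actually received (packet.getLength() worth),
    not the whole receive buffer -- wrapping the full buffer forwards trailing
    garbage from earlier, larger datagrams.
    """
    if len(payload) != TS_UDP_PACK_LEN:
        return []  # drop malformed datagrams instead of forwarding them
    chunks = [payload[i:i + TS_PACKET_LEN]
              for i in range(0, len(payload), TS_PACKET_LEN)]
    # Every MPEG-TS packet starts with the 0x47 sync byte.
    return chunks if all(c[0] == 0x47 for c in chunks) else []
```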