
Media (16)
-
#7 Ambience
16 October 2011
Updated: June 2015
Language: English
Type: Audio
-
#6 Teaser Music
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#5 End Title
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#3 The Safest Place
16 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#4 Emo Creates
15 October 2011
Updated: February 2013
Language: English
Type: Audio
-
#2 Typewriter Dance
15 October 2011
Updated: February 2013
Language: English
Type: Audio
Other articles (69)
-
Dressing it up visually
10 April 2011
MediaSPIP is based on a system of themes and skeletons. Skeletons define the placement of information on the page, specifying a particular use of the platform, while themes define the overall graphic design.
Anyone can propose a new graphic theme or skeleton and make it available to the community. -
Configuring language support
15 November 2010
Accessing the configuration and adding supported languages
To configure support for new languages, you need to go to the "Administer" section of the site.
From there, in the navigation menu, you can access a "Language management" section that lets you enable support for new languages.
Each newly added language remains removable as long as no object has been created in that language. In that case, it becomes greyed out in the configuration and (...) -
A selection of projects using MediaSPIP
29 April 2011
The examples listed below are representative of specific uses of MediaSPIP by certain projects.
Do you think you have built a "remarkable" site with MediaSPIP? Let us know here.
MediaSPIP farm @ Infini
The Infini association develops activities around public reception, internet access points, training, and the management of innovative projects in the field of Information and Communication Technologies, as well as website hosting. It plays a unique role in this area (...)
On other sites (6777)
-
uploading video to ftp while transcoding with Xuggler in Java on multiple platforms
27 November 2013, by Holly
I'm trying to transcode a video on a client PC (Win, Linux, Mac OS - x32 or x64 - all 6) and write the output directly to an FTP server using the Java library Xuggler. With pure ffmpeg it would look like this:
ffmpeg -i "local.mp4" -ftp-write-seekable 0 -c:v libx264 -crf 25 -f flv ftp://user:pass@server.net:1234/uploaded.flv
I'm assuming it must be possible, since ffmpeg can do it and Xuggler is supposed to be a wrapper for ffmpeg. I got it working using
exec("ffmpeg")
but it needs to work on all 6 OS combinations mentioned above.
I tried to adapt this example: https://groups.google.com/forum/#!msg/xuggler-users/QeFTxqgc8Bg/0j1ntsl3tI0J by just using
ftp://user:pass@server.net:1234/uploaded.flv
as the URL, but that does not work: Xuggler is unable to create a container from such a URL.
I guess I should be able to write into an OutputStream and have ftp4j read from that stream and write to FTP. What would I need to consider for that?
Failing all else I could write into a temp file and upload that, but I really don't like that as a solution.
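For illustration only: the OutputStream/InputStream bridge described above (one thread producing transcoded bytes, another consuming them and uploading) can be sketched language-agnostically. The sketch below is Python rather than Java, and `fake_transcoder` and `fake_upload` are hypothetical stand-ins for Xuggler's writer and ftp4j's upload call; it only demonstrates the pipe-between-threads shape, not the actual libraries.

```python
import os
import threading

def fake_transcoder(out):
    # Hypothetical stand-in for the transcoder: writes chunks of
    # "encoded" bytes into the write end of the pipe, then closes it
    # so the reader sees EOF.
    for chunk in (b"hdr", b"frame1", b"frame2"):
        out.write(chunk)
    out.close()

def fake_upload(inp, sink):
    # Hypothetical stand-in for the FTP upload: reads from the pipe's
    # read end until EOF and "uploads" by appending to sink.
    while True:
        chunk = inp.read(4096)
        if not chunk:
            break
        sink.extend(chunk)

read_fd, write_fd = os.pipe()
reader = os.fdopen(read_fd, "rb")
writer = os.fdopen(write_fd, "wb", buffering=0)

uploaded = bytearray()
t = threading.Thread(target=fake_upload, args=(reader, uploaded))
t.start()
fake_transcoder(writer)   # producer runs in the main thread
t.join()

print(bytes(uploaded))  # b'hdrframe1frame2'
```

In Java the same shape is usually built from PipedOutputStream/PipedInputStream, with the transcoder and the upload call on separate threads so neither blocks the other indefinitely.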
-
Can someone please provide a code example of how to make Xuggler write to FTP directly?
-
As a side question, I'm still looking into how best to capture data for displaying progress in the GUI. So far I found:
IMediaListener onWritePacket(IWritePacketEvent event)
which I would register on the writer. However, I'm not sure how to get the details ffmpeg reports (current compression, output size so far). The only useful info I found so far is the timestamp. Any help on that would also be most welcome.
-
Muxing raw h264 + aac into mp4 file with av_interleaved_write_frame() returning 0 but the video is not playable
3 April 2020, by Jaw109
I have a program [1] that muxes audio and video into an mp4 file (in individual worker threads, retrieving audio/video frames from a streaming daemon). The audio plays perfectly in VLC, but the video is not playable; VLC's debug log shows that the start-code of the video frames is not found.



I have another demuxing program [2] to retrieve all the frames and see what happened. I found that the video frames had been modified:



00000001 674D0029... was modified into 00000019 674D0029... (framesize is 29)
00000001 68EE3C80... was modified into 00000004 68EE3C80... (framesize is 8)
00000001 65888010... was modified into 0002D56F 65888010... (framesize is 185715)
00000001 619A0101... was modified into 00003E1E 619A0101... (framesize is 15906)
00000001 619A0202... was modified into 00003E3C 619A0202... (framesize is 15936)
00000001 619A0303... was modified into 00003E1E 619A0303... (framesize is 15581)




It seems the h264 start-code was replaced with something like the frame size. But why? Is there anything I did wrong? (Any ideas? Some flag? AVPacket initialization? AVPacket's data copied wrongly?)
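Not an answer, but a sanity check on the dump above: in every line where the numbers are internally consistent, the new 4-byte prefix equals the reported frame size minus 4. That is exactly what you would see if the 4-byte Annex B start code 00000001 had been overwritten in place with a 4-byte big-endian NAL length, which is the AVCC layout mp4 uses internally. Whether the mp4 muxer is actually doing that conversion here is an inference, not something the dump proves; the check itself:

```python
# (4-byte prefix observed after demuxing, frame size reported by [2])
samples = [
    (0x00000019, 29),      # 0x67... NAL (SPS)
    (0x00000004, 8),       # 0x68... NAL (PPS)
    (0x0002D56F, 185715),  # 0x65... NAL (IDR slice)
    (0x00003E1E, 15906),   # 0x61... NAL (non-IDR slice)
    (0x00003E3C, 15936),
]

for prefix, size in samples:
    # AVCC stores each NAL as <4-byte big-endian length><NAL payload>,
    # so replacing a 4-byte start code in place leaves length = size - 4.
    assert prefix == size - 4

print("every prefix equals the frame size minus the 4-byte start code")
```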



[1] muxing program



int go_on = 1;
std::mutex g_mutex;
AVFormatContext* g_FmtCntx = NULL;
AVStream* g_AudioStream = NULL;
AVStream* g_VideoStream = NULL;

int polling_ringbuffer(int stream_type);

int main(int argc, char** argv)
{
    g_FmtCntx = avformat_alloc_context();
    avio_open(&g_FmtCntx->pb, argv[1], AVIO_FLAG_WRITE);
    g_FmtCntx->oformat = av_guess_format(NULL, argv[1], NULL);
    g_AudioStream = avformat_new_stream(g_FmtCntx, NULL);
    g_VideoStream = avformat_new_stream(g_FmtCntx, NULL);
    initAudioStream(g_AudioStream->codecpar);
    initVideoStream(g_VideoStream->codecpar);
    avformat_write_header(g_FmtCntx, NULL);

    std::thread audio(polling_ringbuffer, AUDIO_RINGBUFFER);
    std::thread video(polling_ringbuffer, VIDEO_RINGBUFFER);

    audio.join();
    video.join();

    av_write_trailer(g_FmtCntx);
    if (g_FmtCntx->oformat && !(g_FmtCntx->oformat->flags & AVFMT_NOFILE) && g_FmtCntx->pb)
        avio_close(g_FmtCntx->pb);
    avformat_free_context(g_FmtCntx);

    return 0;
}

int polling_ringbuffer(int stream_type)
{
    uint8_t* data = new uint8_t[1024*1024];
    int64_t timestamp = 0;
    int data_len = 0;
    while (go_on)
    {
        const std::lock_guard<std::mutex> lock(g_mutex);
        data_len = ReadRingbuffer(stream_type, data, 1024*1024, &timestamp);

        AVPacket pkt = {0};
        av_init_packet(&pkt);
        pkt.data = data;
        pkt.size = data_len;

        static AVRational r = {1, 1000};
        switch (stream_type)
        {
        case STREAMTYPE_AUDIO:
            pkt.stream_index = g_AudioStream->index;
            pkt.flags = 0;
            pkt.pts = av_rescale_q(timestamp, r, g_AudioStream->time_base);
            break;
        case STREAMTYPE_VIDEO:
            pkt.stream_index = g_VideoStream->index;
            pkt.flags = isKeyFrame(data, data_len) ? AV_PKT_FLAG_KEY : 0;
            pkt.pts = av_rescale_q(timestamp, r, g_VideoStream->time_base);
            break;
        }
        static int64_t lastPTS = 0;
        pkt.dts = pkt.pts;
        pkt.duration = (lastPTS == 0) ? 0 : (pkt.pts - lastPTS);
        lastPTS = pkt.pts;

        int ret = av_interleaved_write_frame(g_FmtCntx, &pkt);
        if (0 != ret)
            printf("[%s:%d] av_interleaved_write_frame():%d\n", __FILE__, __LINE__, ret);
    }

    delete[] data;
    return 0;
}
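As an aside on the timestamp handling above: av_rescale_q(a, bq, cq) computes a * bq / cq with rounding, so rescaling a millisecond timestamp (time base 1/1000) into a typical 1/90000 mp4 track time base just multiplies it by 90. A minimal arithmetic model (not FFmpeg itself; note that Python's round() differs from FFmpeg's away-from-zero rounding on exact halves):

```python
from fractions import Fraction

def av_rescale_q_model(a, bq, cq):
    # Model of FFmpeg's av_rescale_q: a * bq / cq, rounded to nearest.
    # bq and cq are (numerator, denominator) time bases.
    return round(a * Fraction(*bq) / Fraction(*cq))

# A 40 ms timestamp in the muxer's r = {1, 1000} time base, rescaled
# into a 90 kHz track time base, becomes 40 * 90 = 3600 ticks.
print(av_rescale_q_model(40, (1, 1000), (1, 90000)))  # 3600
```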




[2] demuxing program



int main(int argc, char** argv)
{
    AVFormatContext* pFormatCtx = avformat_alloc_context();
    AVPacket pkt;
    av_init_packet(&pkt);
    avformat_open_input(&pFormatCtx, argv[1], NULL, NULL);
    for (;;)
    {
        if (av_read_frame(pFormatCtx, &pkt) >= 0)
            printf("[%d] %s (len:%d)\n", pkt.stream_index, BinToHex(pkt.data, MIN(64, pkt.size)), pkt.size);
        else
            break;
    }

    avformat_close_input(&pFormatCtx);
    return 0;
}




[3] Here is my environment:



Linux MY-RASP-4 4.14.98 #1 SMP Mon Jun 24 12:34:42 UTC 2019 armv7l GNU/Linux
ffmpeg version 4.1 Copyright (c) 2000-2018 the FFmpeg developers
built with gcc 8.2.0 (GCC)

libavutil 56. 22.100 / 56. 22.100
libavcodec 58. 35.100 / 58. 35.100
libavformat 58. 20.100 / 58. 20.100
libavdevice 58. 5.100 / 58. 5.100
libavfilter 7. 40.101 / 7. 40.101
libswscale 5. 3.100 / 5. 3.100
libswresample 3. 3.100 / 3. 3.100
libpostproc 55. 3.100 / 55. 3.100



-
Discord bot returning odd error message and not playing sound
26 February 2020, by Ravenr_
I am attempting to create a function of my discord bot that joins your voice channel and then plays something from YouTube, as specified in the command, i.e.
$play <youtube link>
The problem is that my bot joins the voice channel but doesn't play any sound, and outputs an error to the console that I don't know how to fix.
My code:
const ytdl = require("ytdl-core");
module.exports = {
    name: 'play',
    description: 'initiates music methods of the bot',
    execute(msg, args){
        var servers = {};
        function play(connection, msg){
            var server = servers[msg.guild.id];
            server.dispatcher = connection.playStream(ytdl(server.queue[0], {filter: "audioonly"}));
            server.queue.shift();
            server.dispatcher.on("end", function(){
                if(server.queue[0]){
                    play(connection, msg);
                }else{
                    connection.disconnect();
                }
            });
        }
        if(!args[1]){
            return msg.channel.send("you need to provide a link");
        }
        if(!msg.member.voiceChannel){
            return msg.channel.send("You must be in a voice channel to use this feature");
        }
        if(!servers[msg.guild.id]) servers[msg.guild.id] = {
            queue: []
        }
        var server = servers[msg.guild.id];
        server.queue.push(args[1]);
        if(!msg.guild.voiceConnection) msg.member.voiceChannel.join().then(function(connection){
            play(connection, msg);
        })
    }
}

The Error:
2020-02-26T16:31:59.458215+00:00 app[worker.1]: (node:4) UnhandledPromiseRejectionWarning: TypeError [ERR_INVALID_ARG_TYPE]: The "file" argument must be of type string. Received an instance of Object
2020-02-26T16:31:59.458223+00:00 app[worker.1]: at validateString (internal/validators.js:117:11)
2020-02-26T16:31:59.458224+00:00 app[worker.1]: at normalizeSpawnArguments (child_process.js:406:3)
2020-02-26T16:31:59.458224+00:00 app[worker.1]: at Object.spawn (child_process.js:542:16)
2020-02-26T16:31:59.458225+00:00 app[worker.1]: at new FfmpegProcess (/app/node_modules/prism-media/src/transcoders/ffmpeg/FfmpegProcess.js:14:33)
2020-02-26T16:31:59.458225+00:00 app[worker.1]: at FfmpegTranscoder.transcode (/app/node_modules/prism-media/src/transcoders/ffmpeg/Ffmpeg.js:34:18)
2020-02-26T16:31:59.458226+00:00 app[worker.1]: at MediaTranscoder.transcode (/app/node_modules/prism-media/src/transcoders/MediaTranscoder.js:27:31)
2020-02-26T16:31:59.458226+00:00 app[worker.1]: at Prism.transcode (/app/node_modules/prism-media/src/Prism.js:13:28)
2020-02-26T16:31:59.458227+00:00 app[worker.1]: at AudioPlayer.playUnknownStream (/app/node_modules/discord.js/src/client/voice/player/AudioPlayer.js:97:35)
2020-02-26T16:31:59.458231+00:00 app[worker.1]: at VoiceConnection.playStream (/app/node_modules/discord.js/src/client/voice/VoiceConnection.js:478:24)
2020-02-26T16:31:59.458232+00:00 app[worker.1]: at play (/app/commands/play.js:11:44)
For reference, these are the links I tested it with: 1 & 2
I'm not sure how to make queue[0] a string, which is what I assume the problem is.
I was thinking of using toString(), i.e.
server.queue[0].toString()
but I think that will just return the memory address.
If anyone can help me work out what the issue is or how to fix it, that would be great.