
Media (1)
-
Revolution of Open-source and film making towards open film making
6 October 2011, by
Updated: July 2013
Language: English
Type: Text
Other articles (68)
-
Customizing by adding your logo, banner or background image
5 September 2013, by
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Permissions overridden by plugins
27 April 2010, by
Mediaspip core: autoriser_auteur_modifier(), so that visitors can edit their own information on the authors page
-
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your MédiaSpip installation is at version 0.2 or later. If necessary, contact your MédiaSpip administrator to find out.
On other sites (11230)
-
ffmpeg for Android build shows error
2 April 2015, by Jamal
I would like to build ffmpeg for Android, so I am following the steps from this tutorial: http://www.roman10.net/how-to-build-ffmpeg-with-ndk-r9/

What I tried:

1. Downloaded android-ndk-r10d-darwin-x86_64.bin for Mac 64-bit OS.
2. Decompressed/extracted the NDK archive file.
3. Downloaded ffmpeg 2.6.1.
4. Decompressed/extracted the ffmpeg 2.6.1 file.
5. Copied the extracted ffmpeg 2.6.1 folder and pasted it into the android-ndk-r10d/sources/ location.
6. Opened the ffmpeg-2.6.1/configure file and replaced this code

SLIBNAME_WITH_MAJOR='$(SLIBNAME).$(LIBMAJOR)'
LIB_INSTALL_EXTRA_CMD='$$(RANLIB) "$(LIBDIR)/$(LIBNAME)"'
SLIB_INSTALL_NAME='$(SLIBNAME_WITH_VERSION)'
SLIB_INSTALL_LINKS='$(SLIBNAME_WITH_MAJOR) $(SLIBNAME)'

with

SLIBNAME_WITH_MAJOR='$(SLIBPREF)$(FULLNAME)-$(LIBMAJOR)$(SLIBSUF)'
LIB_INSTALL_EXTRA_CMD='$$(RANLIB) "$(LIBDIR)/$(LIBNAME)"'
SLIB_INSTALL_NAME='$(SLIBNAME_WITH_MAJOR)'
SLIB_INSTALL_LINKS='$(SLIBNAME)'

7. Created the build_android.sh file inside the ffmpeg-2.6.1 folder, so it now sits at ffmpeg-2.6.1/build_android.sh. Instead of the NDK=$HOME/Desktop/adt/android-ndk-r9 line I set my current NDK location.
8. Executed the sudo chmod +x build_android.sh command.

While executing ./build_android.sh I got the error below:
Hubs-Mac-mini:ffmpeg-2.6.1 hubmaci7$ ./build_android.sh
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk/usr/include/c++/4.2.1
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk/usr/include/c++/4.2.1
/hubmaci7/Pictures/Hubino/Jamal/android-ndk-r10d/toolchains/arm-linux-androideabi-4.8/prebuilt/linux-x86_64/bin/arm-linux-androideabi-gcc is unable to create an executable file.
C compiler test failed.
If you think configure made a mistake, make sure you are using the latest
version from Git. If the latest version fails, report the problem to the
ffmpeg-user@ffmpeg.org mailing list or IRC #ffmpeg on irc.freenode.net.
Include the log file "config.log" produced by configure as this will help
solve the problem.
Makefile:2: config.mak: No such file or directory
Makefile:59: /common.mak: No such file or directory
Makefile:100: /libavutil/Makefile: No such file or directory
Makefile:100: /library.mak: No such file or directory
Makefile:102: /doc/Makefile: No such file or directory
Makefile:185: /tests/Makefile: No such file or directory
make: *** No rule to make target `/tests/Makefile'. Stop.
Makefile:2: config.mak: No such file or directory
Makefile:59: /common.mak: No such file or directory
Makefile:100: /libavutil/Makefile: No such file or directory
Makefile:100: /library.mak: No such file or directory
Makefile:102: /doc/Makefile: No such file or directory
Makefile:185: /tests/Makefile: No such file or directory
make: *** No rule to make target `/tests/Makefile'. Stop.
Makefile:2: config.mak: No such file or directory
Makefile:59: /common.mak: No such file or directory
Makefile:100: /libavutil/Makefile: No such file or directory
Makefile:100: /library.mak: No such file or directory
Makefile:102: /doc/Makefile: No such file or directory
Makefile:185: /tests/Makefile: No such file or directory
make: *** No rule to make target `/tests/Makefile'. Stop.
Hubs-Mac-mini:ffmpeg-2.6.1 hubmaci7$

Please provide a solution so I can resolve these errors.
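
For reference, the build_android.sh used by the roman10 tutorial linked above is roughly the sketch below. The NDK path, Android platform level and toolchain version here are assumptions and must match the local installation; note in particular that the darwin NDK downloaded in step 1 ships its prebuilt toolchain under a darwin-x86_64 directory, while the error above points at linux-x86_64, and a toolchain path that does not exist is one common reason for configure's "C compiler test failed" message.

#!/bin/bash
# Sketch of a build_android.sh in the style of the roman10 tutorial.
# NDK, the platform level and the host tag below are assumptions:
# use the absolute path of the installed NDK and, on macOS, darwin-x86_64.
NDK=$HOME/path/to/android-ndk-r10d
SYSROOT=$NDK/platforms/android-19/arch-arm/
TOOLCHAIN=$NDK/toolchains/arm-linux-androideabi-4.8/prebuilt/darwin-x86_64

function build_one
{
./configure \
    --prefix=$PREFIX \
    --enable-shared \
    --disable-static \
    --disable-doc \
    --disable-ffmpeg \
    --disable-ffplay \
    --disable-ffprobe \
    --disable-ffserver \
    --disable-symver \
    --cross-prefix=$TOOLCHAIN/bin/arm-linux-androideabi- \
    --target-os=linux \
    --arch=arm \
    --enable-cross-compile \
    --sysroot=$SYSROOT \
    --extra-cflags="-Os -fpic $ADDI_CFLAGS" \
    --extra-ldflags="$ADDI_LDFLAGS"
make clean
make
make install
}

CPU=arm
PREFIX=$(pwd)/android/$CPU
ADDI_CFLAGS="-marm"
build_one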
-
Discord.py repository example: bot can join voice channel, shows voice activity (green ring), but no sound is produced?
19 June 2023, by Jared Robertson
I copied an example from the discord.py repository to test out a bot with music capabilities. Though the code enables YouTube playback, I am only concerned with local audio playback. See the code here: https://github.com/Rapptz/discord.py/blob/v2.3.0/examples/basic_voice.py. I tested this code as is, with only the Discord token portion updated (from constants.py). See the code below:


# This example requires the 'message_content' privileged intent to function.

#my constants.py with API keys, tokens, etc. stored
import constants

import asyncio

import discord
import youtube_dl

from discord.ext import commands

# Suppress noise about console usage from errors
youtube_dl.utils.bug_reports_message = lambda: ''


ytdl_format_options = {
    'format': 'bestaudio/best',
    'outtmpl': '%(extractor)s-%(id)s-%(title)s.%(ext)s',
    'restrictfilenames': True,
    'noplaylist': True,
    'nocheckcertificate': True,
    'ignoreerrors': False,
    'logtostderr': False,
    'quiet': True,
    'no_warnings': True,
    'default_search': 'auto',
    'source_address': '0.0.0.0',  # bind to ipv4 since ipv6 addresses cause issues sometimes
}

ffmpeg_options = {
    'options': '-vn',
}

ytdl = youtube_dl.YoutubeDL(ytdl_format_options)


class YTDLSource(discord.PCMVolumeTransformer):
    def __init__(self, source, *, data, volume=0.5):
        super().__init__(source, volume)

        self.data = data

        self.title = data.get('title')
        self.url = data.get('url')

    @classmethod
    async def from_url(cls, url, *, loop=None, stream=False):
        loop = loop or asyncio.get_event_loop()
        data = await loop.run_in_executor(None, lambda: ytdl.extract_info(url, download=not stream))

        if 'entries' in data:
            # take first item from a playlist
            data = data['entries'][0]

        filename = data['url'] if stream else ytdl.prepare_filename(data)
        return cls(discord.FFmpegPCMAudio(filename, **ffmpeg_options), data=data)


class Music(commands.Cog):
    def __init__(self, bot):
        self.bot = bot

    @commands.command()
    async def join(self, ctx, *, channel: discord.VoiceChannel):
        """Joins a voice channel"""

        if ctx.voice_client is not None:
            return await ctx.voice_client.move_to(channel)

        await channel.connect()

    @commands.command()
    async def play(self, ctx, *, query):
        """Plays a file from the local filesystem"""

        source = discord.PCMVolumeTransformer(discord.FFmpegPCMAudio(query))
        ctx.voice_client.play(source, after=lambda e: print(f'Player error: {e}') if e else None)

        await ctx.send(f'Now playing: {query}')

    @commands.command()
    async def yt(self, ctx, *, url):
        """Plays from a url (almost anything youtube_dl supports)"""

        async with ctx.typing():
            player = await YTDLSource.from_url(url, loop=self.bot.loop)
            ctx.voice_client.play(player, after=lambda e: print(f'Player error: {e}') if e else None)

        await ctx.send(f'Now playing: {player.title}')

    @commands.command()
    async def stream(self, ctx, *, url):
        """Streams from a url (same as yt, but doesn't predownload)"""

        async with ctx.typing():
            player = await YTDLSource.from_url(url, loop=self.bot.loop, stream=True)
            ctx.voice_client.play(player, after=lambda e: print(f'Player error: {e}') if e else None)

        await ctx.send(f'Now playing: {player.title}')

    @commands.command()
    async def volume(self, ctx, volume: int):
        """Changes the player's volume"""

        if ctx.voice_client is None:
            return await ctx.send("Not connected to a voice channel.")

        ctx.voice_client.source.volume = volume / 100
        await ctx.send(f"Changed volume to {volume}%")

    @commands.command()
    async def stop(self, ctx):
        """Stops and disconnects the bot from voice"""

        await ctx.voice_client.disconnect()

    @play.before_invoke
    @yt.before_invoke
    @stream.before_invoke
    async def ensure_voice(self, ctx):
        if ctx.voice_client is None:
            if ctx.author.voice:
                await ctx.author.voice.channel.connect()
            else:
                await ctx.send("You are not connected to a voice channel.")
                raise commands.CommandError("Author not connected to a voice channel.")
        elif ctx.voice_client.is_playing():
            ctx.voice_client.stop()


intents = discord.Intents.default()
intents.message_content = True

bot = commands.Bot(
    command_prefix=commands.when_mentioned_or("!"),
    description='Relatively simple music bot example',
    intents=intents,
)


@bot.event
async def on_ready():
    print(f'Logged in as {bot.user} (ID: {bot.user.id})')
    print('------')


async def main():
    async with bot:
        await bot.add_cog(Music(bot))
        # referenced my discord token in constants.py
        await bot.start(constants.discord_token)


asyncio.run(main())



This code should cause the Discord bot to join the voice channel and play a local audio file when the message "!play query" is entered into the Discord text channel, with query in my case being "test.mp3". When "!play test.mp3" is entered, the bot does join the voice channel and appears to generate voice activity; however, no sound is heard. No errors are thrown in the output. The bot simply continues on silently.
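
One way to isolate the FFmpeg side of the pipeline is to run, outside the bot, roughly the conversion that discord.py's FFmpegPCMAudio requests from ffmpeg (48 kHz, stereo, signed 16-bit PCM) and discard the output. This is only a sanity-check sketch for a Windows command prompt (use /dev/null instead of NUL elsewhere), and test.mp3 stands in for whatever file the bot is given:

rem Show which ffmpeg binary is picked up first from PATH
where ffmpeg

rem Decode test.mp3 to raw 48 kHz stereo signed 16-bit PCM and discard it;
rem any decoding or file-path problem is printed here instead of being swallowed by the bot
ffmpeg -i test.mp3 -f s16le -ar 48000 -ac 2 -loglevel warning -y NUL

If this command runs cleanly, the file and the ffmpeg install are probably fine, and the silence is more likely to come from the voice-connection side than from audio decoding.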


Here's what I've checked and tried:



-
I am in the Discord voice channel. The code will alert the user that they are not in a voice channel if they attempt to summon the bot without being in one.


-
FFmpeg is installed and added to PATH. I even stored a copy of the .exe in the project folder.


-
I've tried specifying the full path of both FFmpeg (adding the "executable=" arg to the play function) and the audio file (stored at C:/test.mp3 and a copy in the project folder).


-
All libraries up to date.


-
Reviewed discord.py docs (https://discordpy.readthedocs.io/en/stable/api.html#voice-related) and played around with opuslib (docs say not necessary on Windows, which I'm on) and FFmpegOpusAudio, but results were the same.


-
Referenced numerous StackOverflow threads, including every one suggested when I typed the title of this post. Tried each suggestion individually and in various combinations when possible. See a few below:
Discord.py music_cog, bot joins channel but plays no sound
Discord.py Music Bot doesn’t play music
Discord.py Bot Not Playing music


-
Double-checked that my sound is on and the volume is turned up. My sound is working; I can hear the Discord notification when the bot joins the voice channel.


-
The issue is the same if I try to play a YouTube file with the !yt command. The bot joins the channel fine, but no sound is produced.

That's everything I can think of at the moment. There seem to be many posts regarding this exact topic and variations of it. I've been unable to find a clear and consistent answer, nor one that works for me. I am willing to try anything at this point, as it is obviously possible, but for whatever reason success eludes me. Thank you in advance for any assistance you are willing to offer.


-
using the ffmpeg library to write to an mp4, ffprobe shows there are 100 frames and 100 packets, but av_interleaved_write_frame is only called 50 times
2 May 2023, by ollydbg23
Here is my code to generate an mp4 file using the ffmpeg and opencv libraries. The opencv part only generates 100 images (frames), and the ffmpeg part compresses those images into an mp4 file.

Here is the working code:


#include <iostream>
#include <vector>
#include <cstring>
#include <fstream>
#include <sstream>
#include <stdexcept>
#include <opencv2/opencv.hpp>
extern "C" {
#include <libavutil/imgutils.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/opt.h>
}

#include<cstdlib> // to generate time stamps

using namespace std;
using namespace cv;

int main()
{
 // Set up input frames as BGR byte arrays
 vector<Mat> frames;

 int width = 640;
 int height = 480;
 int num_frames = 100;
 Scalar black(0, 0, 0);
 Scalar white(255, 255, 255);
 int font = FONT_HERSHEY_SIMPLEX;
 double font_scale = 1.0;
 int thickness = 2;

 for (int i = 0; i < num_frames; i++) {
 Mat frame = Mat::zeros(height, width, CV_8UC3);
 putText(frame, std::to_string(i), Point(width / 2 - 50, height / 2), font, font_scale, white, thickness);
 frames.push_back(frame);
 }

 // generate a serial of time stamps which is used to set the PTS value
 // suppose they are in ms unit, the time interval is between 30ms to 59ms
 vector<int> timestamps;

 for (int i = 0; i < num_frames; i++) {
 int timestamp;
 if (i == 0)
 timestamp = 0;
 else
 {
 int random = 30 + (rand() % 30);
 timestamp = timestamps[i-1] + random;
 }

 timestamps.push_back(timestamp);
 }

 // Populate frames with BGR byte arrays

 // Initialize FFmpeg
 //av_register_all();

 // Set up output file
 AVFormatContext* outFormatCtx = nullptr;
 //AVCodec* outCodec = nullptr;
 AVCodecContext* outCodecCtx = nullptr;
 //AVStream* outStream = nullptr;
 //AVPacket outPacket;

 const char* outFile = "output.mp4";
 int outWidth = frames[0].cols;
 int outHeight = frames[0].rows;
 int fps = 25;

 // Open the output file context
 avformat_alloc_output_context2(&outFormatCtx, nullptr, nullptr, outFile);
 if (!outFormatCtx) {
 cerr << "Error: Could not allocate output format context" << endl;
 return -1;
 }

 // Open the output file
 if (avio_open(&outFormatCtx->pb, outFile, AVIO_FLAG_WRITE) < 0) {
 cerr << "Error opening output file" << std::endl;
 return -1;
 }

 // Set up output codec
 const AVCodec* outCodec = avcodec_find_encoder(AV_CODEC_ID_H264);
 if (!outCodec) {
 cerr << "Error: Could not find H.264 codec" << endl;
 return -1;
 }

 outCodecCtx = avcodec_alloc_context3(outCodec);
 if (!outCodecCtx) {
 cerr << "Error: Could not allocate output codec context" << endl;
 return -1;
 }
 outCodecCtx->codec_id = AV_CODEC_ID_H264;
 outCodecCtx->codec_type = AVMEDIA_TYPE_VIDEO;
 outCodecCtx->pix_fmt = AV_PIX_FMT_YUV420P;
 outCodecCtx->width = outWidth;
 outCodecCtx->height = outHeight;
 //outCodecCtx->time_base = { 1, fps*1000 }; // 25000
 outCodecCtx->time_base = { 1, fps}; // 25000
 outCodecCtx->framerate = {fps, 1}; // 25
 outCodecCtx->bit_rate = 4000000;

 //https://github.com/leandromoreira/ffmpeg-libav-tutorial
 //We set the flag AV_CODEC_FLAG_GLOBAL_HEADER which tells the encoder that it can use the global headers.
 if (outFormatCtx->oformat->flags & AVFMT_GLOBALHEADER)
 {
 outCodecCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER; //
 }

 // Open output codec
 if (avcodec_open2(outCodecCtx, outCodec, nullptr) < 0) {
 cerr << "Error: Could not open output codec" << endl;
 return -1;
 }

 // Create output stream
 AVStream* outStream = avformat_new_stream(outFormatCtx, outCodec);
 if (!outStream) {
 cerr << "Error: Could not allocate output stream" << endl;
 return -1;
 }

 // Configure output stream parameters (e.g., time base, codec parameters, etc.)
 // ...

 // Connect output stream to format context
 outStream->codecpar->codec_id = outCodecCtx->codec_id;
 outStream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
 outStream->codecpar->width = outCodecCtx->width;
 outStream->codecpar->height = outCodecCtx->height;
 outStream->codecpar->format = outCodecCtx->pix_fmt;
 outStream->time_base = outCodecCtx->time_base;

 int ret = avcodec_parameters_from_context(outStream->codecpar, outCodecCtx);
 if (ret < 0) {
 cerr << "Error: Could not copy codec parameters to output stream" << endl;
 return -1;
 }

 outStream->avg_frame_rate = outCodecCtx->framerate;
 //outStream->id = outFormatCtx->nb_streams++; <--- We shouldn't modify outStream->id

 ret = avformat_write_header(outFormatCtx, nullptr);
 if (ret < 0) {
 cerr << "Error: Could not write output header" << endl;
 return -1;
 }

 // Convert frames to YUV format and write to output file
 int frame_count = -1;
 for (const auto& frame : frames) {
 frame_count++;
 AVFrame* yuvFrame = av_frame_alloc();
 if (!yuvFrame) {
 cerr << "Error: Could not allocate YUV frame" << endl;
 return -1;
 }
 av_image_alloc(yuvFrame->data, yuvFrame->linesize, outWidth, outHeight, AV_PIX_FMT_YUV420P, 32);

 yuvFrame->width = outWidth;
 yuvFrame->height = outHeight;
 yuvFrame->format = AV_PIX_FMT_YUV420P;

 // Convert BGR frame to YUV format
 Mat yuvMat;
 cvtColor(frame, yuvMat, COLOR_BGR2YUV_I420);
 memcpy(yuvFrame->data[0], yuvMat.data, outWidth * outHeight);
 memcpy(yuvFrame->data[1], yuvMat.data + outWidth * outHeight, outWidth * outHeight / 4);
 memcpy(yuvFrame->data[2], yuvMat.data + outWidth * outHeight * 5 / 4, outWidth * outHeight / 4);

 // Set up output packet
 //av_init_packet(&outPacket); //error C4996: 'av_init_packet': was declared deprecated
 AVPacket* outPacket = av_packet_alloc();
 // av_packet_alloc() already returns an initialized packet, so no av_init_packet()/memset is needed.
 //outPacket->data = nullptr;
 //outPacket->size = 0;

 // set the frame pts, do I have to set the package pts?

 // yuvFrame->pts = av_rescale_q(timestamps[frame_count]*25, outCodecCtx->time_base, outStream->time_base); //Set PTS timestamp
 yuvFrame->pts = av_rescale_q(frame_count*frame_count, outCodecCtx->time_base, outStream->time_base); //Set PTS timestamp

 // Encode frame and write to output file
 int ret = avcodec_send_frame(outCodecCtx, yuvFrame);
 if (ret < 0) {
 cerr << "Error: Could not send frame to output codec" << endl;
 return -1;
 }
 while (ret >= 0)
 {
 ret = avcodec_receive_packet(outCodecCtx, outPacket);

 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
 {
 int abc;
 abc++;
 break;
 }
 else if (ret < 0)
 {
 cerr << "Error: Could not receive packet from output codec" << endl;
 return -1;
 }

 //av_packet_rescale_ts(&outPacket, outCodecCtx->time_base, outStream->time_base);

 outPacket->stream_index = outStream->index;

 outPacket->duration = av_rescale_q(1, outCodecCtx->time_base, outStream->time_base); // Set packet duration

 ret = av_interleaved_write_frame(outFormatCtx, outPacket);

 static int call_write = 0;

 call_write++;
 printf("av_interleaved_write_frame %d\n", call_write);

 av_packet_unref(outPacket);
 if (ret < 0) {
 cerr << "Error: Could not write packet to output file" << endl;
 return -1;
 }
 }

 av_frame_free(&yuvFrame);
 }

 // Flush the encoder
 ret = avcodec_send_frame(outCodecCtx, nullptr);
 if (ret < 0) {
 std::cerr << "Error flushing encoder: " << std::endl;
 return -1;
 }

 while (ret >= 0) {
 AVPacket* pkt = av_packet_alloc();
 if (!pkt) {
 std::cerr << "Error allocating packet" << std::endl;
 return -1;
 }
 ret = avcodec_receive_packet(outCodecCtx, pkt);

 // Write the packet to the output file
 if (ret == 0)
 {
 pkt->stream_index = outStream->index;
 pkt->duration = av_rescale_q(1, outCodecCtx->time_base, outStream->time_base); // <---- Set packet duration
 ret = av_interleaved_write_frame(outFormatCtx, pkt);
 av_packet_unref(pkt);
 if (ret < 0) {
 std::cerr << "Error writing packet to output file: " << std::endl;
 return -1;
 }
 }
 }


 // Write output trailer
 av_write_trailer(outFormatCtx);

 // Clean up
 avcodec_close(outCodecCtx);
 avcodec_free_context(&outCodecCtx);
 avformat_free_context(outFormatCtx);

 return 0;
}



Note that I have used the ffprobe tool (one of the tools from ffmpeg) to inspect the generated mp4 file.

I see that the mp4 file has 100 frames and 100 packets, but in my code I have these lines:


static int call_write = 0;

 call_write++;
 printf("av_interleaved_write_frame %d\n", call_write);



I see that the av_interleaved_write_frame function is only called 50 times, not the expected 100 times. Can anyone explain this?

Thanks.


BTW, from the ffmpeg documentation (see here: "For video, it should typically contain one compressed frame"), I see that a packet mainly contains one video frame, so ffprobe's result looks correct.

Here are the commands I used to inspect the mp4 file:


ffprobe -show_frames output.mp4 >> frames.txt
ffprobe -show_packets output.mp4 >> packets.txt
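
As an aside, instead of dumping every frame and packet and counting the lines, ffprobe can report just the counts for the video stream; the one-liner below assumes the first (and only) video stream is the one of interest:

ffprobe -v error -select_streams v:0 -count_frames -count_packets -show_entries stream=nb_read_frames,nb_read_packets output.mp4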



My testing code is derived from an answer to another question here: avformat_write_header() function call crashed when I try to save several RGB data to a output.mp4 file