
Media (2)
-
Example of action buttons for a collaborative collection
27 February 2013, by
Updated: March 2013
Language: French
Type: Image
-
Example of action buttons for a personal collection
27 February 2013, by
Updated: February 2013
Language: English
Type: Image
Other articles (37)
-
Customizing by adding your logo, banner or background image
5 September 2013. Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013. Present changes to your MédiaSPIP, or news about your projects on your MédiaSPIP, via the news section.
In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the news-item creation form.
News-item creation form: for a document of the news type, the default fields are: publication date (customize the publication date) (...) -
Publishing on MédiaSpip
13 June 2013. Can I post content from an iPad tablet?
Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact your MédiaSpip administrator to find out.
On other sites (6362)
-
FFmpeg : "Invalid data found when processing input" when reading video from memory
24 April 2020, by Drawoceans. I'm trying to read an MP4 video file from memory with C++ and the FFmpeg library, but I get an "Invalid data found when processing input" error. Here is my code:



#include <cstdio>
#include <cstring>
#include <fstream>
#include <filesystem>

extern "C"
{
#include "libavformat/avformat.h"
#include "libavformat/avio.h"
}

using namespace std;
namespace fs = std::filesystem;

struct VideoBuffer
{
    uint8_t* ptr;
    size_t size;
};

static int read_packet(void* opaque, uint8_t* buf, int buf_size)
{
    VideoBuffer* vb = (VideoBuffer*)opaque;
    buf_size = FFMIN(buf_size, vb->size);

    if (!buf_size) {
        return AVERROR_EOF;
    }

    printf("ptr:%p size:%zu\n", vb->ptr, vb->size);

    memcpy(buf, vb->ptr, buf_size);
    vb->ptr += buf_size;
    vb->size -= buf_size;

    return buf_size;
}

void print_ffmpeg_error(int ret)
{
    char* err_str = new char[256];
    av_strerror(ret, err_str, 256);
    printf("%s\n", err_str);
    delete[] err_str;
}

int main()
{
    fs::path video_path = "test.mp4";
    ifstream video_file;
    video_file.open(video_path);
    if (!video_file) {
        abort();
    }
    size_t video_size = fs::file_size(video_path);
    char* video_ptr = new char[video_size];
    video_file.read(video_ptr, video_size);
    video_file.close();

    VideoBuffer vb;
    vb.ptr = (uint8_t*)video_ptr;
    vb.size = video_size;

    AVIOContext* avio = nullptr;
    uint8_t* avio_buffer = nullptr;
    size_t avio_buffer_size = 4096;
    avio_buffer = (uint8_t*)av_malloc(avio_buffer_size);
    if (!avio_buffer) {
        abort();
    }

    avio = avio_alloc_context(avio_buffer, avio_buffer_size, 0, &vb, read_packet, nullptr, nullptr);

    AVFormatContext* fmt_ctx = avformat_alloc_context();
    if (!fmt_ctx) {
        abort();
    }
    fmt_ctx->pb = avio;

    int ret = 0;
    ret = avformat_open_input(&fmt_ctx, nullptr, nullptr, nullptr);
    if (ret < 0) {
        print_ffmpeg_error(ret);
    }

    avformat_close_input(&fmt_ctx);
    av_freep(&avio->buffer);
    av_freep(&avio);
    delete[] video_ptr;
    return 0;
}



And here is what I got:



ptr:000001E10CEA0070 size:4773617
ptr:000001E10CEA1070 size:4769521
...
ptr:000001E10D32D070 size:1777
[mov,mp4,m4a,3gp,3g2,mj2 @ 000001e10caaeac0] moov atom not found
Invalid data found when processing input




FFmpeg version is 4.2.2, on Windows 10 with Visual Studio 2019 in x64 Debug mode. The FFmpeg library is the Windows compiled shared build from the FFmpeg homepage. Some of the code is from the official example
avio_reading.c
. The target MP4 file plays normally in VLC, so I think the file is OK. Is there anything wrong in my code? Or is it an FFmpeg library problem?

-
FFMPEG:av_rescale_q - time_base difference
2 December 2020, by Michael IV. I want to know, once and for all, how time-base calculation and rescaling work in FFmpeg.
Before getting to this question I did some research and found many contradictory answers, which made it even more confusing.
So, based on the official FFmpeg examples, one has to





rescale output packet timestamp values from codec to stream timebase





with something like this:



pkt->pts = av_rescale_q_rnd(pkt->pts, *time_base, st->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
pkt->dts = av_rescale_q_rnd(pkt->dts, *time_base, st->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
pkt->duration = av_rescale_q(pkt->duration, *time_base, st->time_base);




But in another question someone asked something similar to mine and gave more examples, each of them doing it differently. And contrary to the answer there, which says that all of those ways are fine, for me only the following approach works:



frame->pts += av_rescale_q(1, video_st->codec->time_base, video_st->time_base);




In my application I am generating video packets (H.264) at 60 fps outside the FFmpeg API and then writing them into an MP4 container.



I set explicitly:



video_st->time_base = {1,60};
video_st->r_frame_rate = {60,1};
video_st->codec->time_base = {1 ,60};




The first weird thing happens right after I have written the header for the output format context:



AVDictionary *opts = nullptr;
int ret = avformat_write_header(mOutputFormatContext, &opts);
av_dict_free(&opts);




After that,
video_st->time_base
is populated with:


num = 1;
den = 15360




And I fail to understand why.



I would like someone to explain that to me. Next, before writing a frame, I calculate the
PTS for the packet. In my case PTS = DTS, as I don't use B-frames at all.



And I have to do this:



const int64_t duration = av_rescale_q(1, video_st->codec->time_base, video_st->time_base);
totalPTS += duration; // totalPTS is a global variable
packet->pts = totalPTS;
packet->dts = totalPTS;
av_write_frame(mOutputFormatContext, mpacket);




I don't get why codec and stream have different time_base values even though I explicitly set them to be the same. And because I see, across all the examples, that
av_rescale_q
is always used to calculate the duration, I really want someone to explain this point.


Additionally, as a comparison, and for the sake of experiment, I decided to try writing the stream into a WebM container. In that case I don't use a libav output stream at all:
I just grab the same packet I use for the MP4 encoding and write it manually into an EBML stream. In this case I calculate the duration like this:



const int64_t duration =
    (video_st->codec->time_base.num / video_st->codec->time_base.den) * 1000;




Multiplication by 1000 is required for WebM, as timestamps are expressed in milliseconds in that container. And this works. So why, in the case of MP4 encoding, is there a difference in time_base that has to be rescaled?


-
FFmpeg starting manually but not with Systemd on boot
23 June 2021, by eKrajnak. On a Raspberry Pi 4 B 4GB with the official Debian 10 image, I have a /home/pi/run.sh script with the following:


#!/bin/bash
ffmpeg -nostdin -framerate 15 -video_size 1280x720 -input_format yuyv422 -i /dev/video0 -f alsa -i hw:Device \
 -af acompressor=threshold=-14dB:ratio=9:attack=10:release=1000 -c:a aac -ac 2 -ar 48000 -ab 160k \
 -c:v libx264 -pix_fmt yuv420p -b:v 3M -bf 1 -g 20 -flags +ilme+ildct -preset ultrafast \
 -streamid 0:0x101 -streamid 1:0x100 -mpegts_pmt_start_pid 4096 -mpegts_start_pid 0x259 -metadata:s:a:0 language="" -mpegts_service_id 131 -mpegts_transport_stream_id 9217 -metadata provider_name="Doesnt matter" -metadata service_name="Doesnt matter" \
 -minrate 3500 -maxrate 3500k -bufsize 4500k -muxrate 4000k -f mpegts "udp://@239.1.67.13:1234?pkt_size=1316&bitrate=4000000&dscp=34" -loglevel debug < /dev/null > /tmp/ff3.log 2>&1



The script starts from the console without problems. It takes audio from a USB sound card and video from a USB camera and creates a UDP stream for IPTV. Then I created a systemd service:


[Unit]
Description=Streamer
After=multi-user.target sound.target network.target

[Service]
ExecStart=/home/pi/run.sh
KillMode=control-group
Restart=on-failure
TimeoutSec=1

[Install]
WantedBy=multi-user.target
Alias=streaming.service



After rebooting the Raspberry Pi, the script starts, but FFmpeg hangs, with these errors repeating in the log:


cur_dts is invalid st:0 (257) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
cur_dts is invalid st:1 (256) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
cur_dts is invalid st:0 (257) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
cur_dts is invalid st:1 (256) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
cur_dts is invalid st:0 (257) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
cur_dts is invalid st:1 (256) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
cur_dts is invalid st:0 (257) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
cur_dts is invalid st:1 (256) [init:1 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)



and it will not start streaming to the UDP target. But if I manually log in over SSH and issue systemctl stop streaming and then systemctl start streaming, FFmpeg starts successfully. What's different about the service auto-start on boot?


Setting the "sleep timeout" at script begginging will not help. However, removing audio stream from FFmpeg config looks to solve auto-start on boot.