
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
Other articles (107)
-
Permissions overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page
-
Frequent problems
10 March 2010
PHP with safe_mode enabled
One of the main sources of problems comes from the PHP configuration, in particular having safe_mode enabled.
The solution would be either to disable safe_mode or to place the script in a directory that Apache can access for the site.
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
On other sites (13022)
-
"Convert regions extracted from wav audio file into Flac audio file " python using FFMPAG in Django
9 avril 2019, par maryam mehboobRegions extracted from wav audio file have "Invalid duration specification for ss"
For Example in my case duration is in this format[(0.006000000000000005, 1.03), (2.0540000000000003, 4.870000000000003)]
This is for converting regions into Flac format.
import os
import subprocess
import tempfile

class FLACConverter(object):  # pylint: disable=too-few-public-methods
    """
    Class for converting a region of an input audio or video file into a FLAC audio file.
    """
    def __init__(self, source_path, include_before=0.25, include_after=0.25):
        self.source_path = source_path
        self.include_before = include_before
        self.include_after = include_after

    def __call__(self, region):
        try:
            print("regions to convert in flac: {}".format(region))
            start = region
            end = region
            # start = max(0, start - self.include_before)
            start = list(map(lambda x: tuple(max(0, y - self.include_before) for y in x), start))
            end = list(map(lambda x: tuple(y + self.include_after for y in x), end))
            temp = tempfile.NamedTemporaryFile(suffix='.flac', delete=False)
            command = ["ffmpeg", "-ss", str(start),
                       "-t", str([tuple(x - y for x, y in zip(x1, x2)) for (x1, x2) in zip(end, start)]),
                       "-y", "-i", self.source_path,
                       "-loglevel", "error", temp.name]
            use_shell = True if os.name == "nt" else False
            subprocess.check_output(command)
            print(temp.name)
            # subprocess.check_output(command, stdin=open(os.devnull), shell=use_shell)
            read_data = temp.read()
            temp.close()
            os.unlink(temp.name)
            print("read_data: {}".format(read_data))
            print("temp: {}".format(temp.name))
            return read_data
        except KeyboardInterrupt:
            return None

I expect the output to be
/var/folders/p1/6ttydjfx2sq9zl4cnmjxgjh40000gp/T/tmpwz5n4fnv.flac
but instead it returns an error: CalledProcessError at /
Command '['ffmpeg', '-ss', '[(0.006000000000000005, 1.03), (2.0540000000000003, 4.870000000000003)]', '-t', '[(0.5, 0.5), (0.5, 0.5)]', '-y', '-i', '/var/folders/p1/6ttydjfx2sq9zl4cnmjxgjh40000gp/T/tmpuyi5spat.wav', '-loglevel', 'error', '/var/folders/p1/6ttydjfx2sq9zl4cnmjxgjh40000gp/T/tmp0vdewoyd.flac']' returned non-zero exit status 1.
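The command fails because str(start) and str(...) hand ffmpeg entire Python lists of tuples as the -ss and -t values, which ffmpeg cannot parse as time specifications. A minimal sketch of one way around it, running one ffmpeg process per (start, end) region so that -ss and -t always receive plain numbers; the convert_region helper name and the input.wav path are illustrative, not from the question:

import os
import subprocess
import tempfile

def convert_region(source_path, start, end, include_before=0.25, include_after=0.25):
    # Convert a single (start, end) region of source_path into FLAC bytes.
    start = max(0.0, start - include_before)
    duration = (end + include_after) - start
    temp = tempfile.NamedTemporaryFile(suffix=".flac", delete=False)
    temp.close()  # ffmpeg writes the file; only the name is needed here
    command = ["ffmpeg", "-ss", str(start), "-t", str(duration),
               "-y", "-i", source_path,
               "-loglevel", "error", temp.name]
    subprocess.check_output(command)
    with open(temp.name, "rb") as f:
        data = f.read()
    os.unlink(temp.name)
    return data

# One ffmpeg invocation per region, so each -ss/-t value is a single number.
regions = [(0.006000000000000005, 1.03), (2.0540000000000003, 4.870000000000003)]
flac_chunks = [convert_region("input.wav", s, e) for (s, e) in regions]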
-
FFmpeg is not recording the whole window
25 May 2021, by John
I'm using FFmpeg to record a window, as illustrated in the figure below. In all cases, the right-hand side of the recorded window is cropped. The command I'm giving is:

ffmpeg -f gdigrab -i title="example.txt - Notepad++" output.mkv

Any suggestion on how to fix this problem is much appreciated.

Here is some additional info:

- Running Windows 10
- Using ffmpeg-20181215-011c911-win64-static, but I have the same issue with other versions of FFmpeg
- Desktop resolution is 3200x1800 (DPI scaling issue?)

This is what the recorded area looks like in the example above.

Update 1:

Recording the whole desktop works fine. However, when recording a region using x and y offsets, the region captured is correct, but the region indicated is wrong. I illustrate this in the image below, which shows a screen capture of the desktop during recording. The background image is a grid and the taskbar has been hidden.

The size of the area to capture is specified as 1280x720, but the region indicated is 1600x900. Also, the x offset is specified as 400px, but the region indicated starts at 500px.

The area recorded is correct! The image below shows a screenshot of the recording during playback in VLC; note that the "misplaced" region indicator can be seen.

Update 2:

I noticed that the mouse cursor is not correctly placed when capturing from the desktop; see the recording below. Everything looks fine during recording, but at playback the cursor is misplaced.

The command issued for the recording above was:

ffmpeg -f gdigrab -framerate 30 -offset_x 1820 -offset_y 100 -video_size 1280x720 -i desktop output5.mkv

Windows 10 / ffmpeg-20181215-011c911-win64-static
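Both mismatches reported above (1280x720 indicated as 1600x900, offset 400 indicated as 500) correspond to a factor of 1.25, i.e. 125% Windows display scaling on the 3200x1800 desktop. If that is indeed the cause, one workaround is to convert between on-screen (physical) pixel coordinates and the values handed to gdigrab. A rough sketch only, assuming a fixed 1.25 scale factor and illustrative coordinates; whether to divide or multiply depends on whether the ffmpeg build in use is DPI-aware:

import subprocess

# Illustrative values; 1.25 corresponds to 125% display scaling,
# which would explain 1280 -> 1600 and 400 -> 500.
scale = 1.25
offset_x, offset_y = 400, 100   # capture origin measured on screen (physical pixels)
width, height = 1280, 720       # capture size measured on screen (physical pixels)

# Assumption: a non-DPI-aware ffmpeg works in logical pixels, so physical
# coordinates are divided by the scale factor (flip to multiplication if the
# mismatch goes the other way on a given setup).
cmd = ["ffmpeg", "-f", "gdigrab", "-framerate", "30",
       "-offset_x", str(round(offset_x / scale)),
       "-offset_y", str(round(offset_y / scale)),
       "-video_size", "{}x{}".format(round(width / scale), round(height / scale)),
       "-i", "desktop", "output.mkv"]
subprocess.run(cmd, check=True)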


-
FFmpeg overwrite an AVPacket
26 January 2019, by Alejandro Ramírez
Hello guys, I am working with video. I want to encrypt the I-frames of a video compressed in H.264, so I get the AVPacket from the video and check if (AVPacket.flags & AV_PKT_FLAG_KEY) to know whether the packet contains an I-frame, but when I try to print AVPacket.data I don't get any information to encrypt.
Where can I get the information for the I-frame? Here is my code, thank you.
#include <iostream>
#include <cstring>
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <libavutil/avutil.h>
}

#define INBUF_SIZE 4096
//#define AV_INPUT_BUFFER_PADDING_SIZE 32

int main (int argc, char *argv[])
{
    //av_register_all(); // omit (deprecated)
    // check the program arguments (the input video path)
    AVFormatContext *pFormatCtx = NULL;
    AVCodec *dec = NULL;
    AVCodecContext *pCodecCtx = NULL;
    AVStream *st = NULL;
    AVDictionary *opts = NULL;
    AVFrame *frame;
    AVPacket avpkt;
    uint8_t inbuf[INBUF_SIZE + AV_INPUT_BUFFER_PADDING_SIZE];
    FILE *f;
    int frame_count;
    int video_stream_index = -1;

    //av_init_packet(&avpkt);
    memset(inbuf + INBUF_SIZE, 0, AV_INPUT_BUFFER_PADDING_SIZE);

    if (avformat_open_input(&pFormatCtx, argv[1], NULL, NULL) != 0)
        return -1;
    if (avformat_find_stream_info(pFormatCtx, NULL) < 0)
        return -1;

    //video_stream_index = av_find_best_stream(pFormatCtx, AVMEDIA_TYPE_VIDEO, -1, -1, &dec, 0);
    video_stream_index = av_find_best_stream(pFormatCtx, AVMEDIA_TYPE_VIDEO, -1, -1, &dec, 0);
    if (video_stream_index < 0)
        return -1;
    std::cout << "video_stream: " << video_stream_index << "\n";

    st = pFormatCtx->streams[video_stream_index];
    std::cout << "number of frames " << st->nb_frames << "\n";
    std::cout << "event_flags " << st->event_flags << "\n";

    //pCodecCtx = st->codec; // deprecated
    dec = avcodec_find_decoder(st->codecpar->codec_id);
    std::cout << "codec_id: " << st->codecpar->codec_id << "\n";
    std::cout << "AV_CODEC_ID_H264: " << AV_CODEC_ID_H264 << "\n";
    if (!dec)
        return -1;

    pCodecCtx = avcodec_alloc_context3(dec);
    if (!pCodecCtx)
        return -1;
    //av_dict_set(&opts, "refcounted_frames", "0", 0);
    avcodec_parameters_to_context(pCodecCtx, st->codecpar);
    std::cout << "all good so far\n";
    if (avcodec_open2(pCodecCtx, dec, &opts) < 0)
        return -1;
    /************* code works fine up to this point *************/

    frame = av_frame_alloc();
    if (!frame)
        return -1;
    av_init_packet(&avpkt);
    avpkt.data = NULL;
    avpkt.size = 0;
    f = fopen(argv[1], "r");   // opened but not used in this snippet

    int times = 1;
    while (av_read_frame(pFormatCtx, &avpkt) >= 0) {
        AVPacket oripkt = avpkt;
        if (oripkt.stream_index == video_stream_index) {
            if (oripkt.flags & AV_PKT_FLAG_KEY) {   // packet contains a keyframe (I-frame)
                std::cout << "times: " << times++ << "\n";
                std::cout << "avpkt.flags: " << oripkt.flags << "\n";
                std::cout << "sizeof avpkt.data: " << sizeof(oripkt.data) << "\n"; // size of the pointer, not of the payload
                std::cout << "avpkt.data: " << oripkt.data << "\n";                // prints the buffer as a C string
                std::cout << "oripkt.size: " << oripkt.size << "\n";
                std::cout << "oripkt.side_data_elems: " << oripkt.side_data_elems << "\n";
                if (!oripkt.data)
                    std::cout << "no data here\n";
            }
        }
    }
    std::cout << "End of the program" << "\n";
}
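A side note on why AVPacket.data looks empty: std::cout << oripkt.data interprets the uint8_t pointer as a NUL-terminated C string, and compressed video packets contain (and often begin with) zero bytes, so little or nothing is printed; likewise sizeof(oripkt.data) is only the size of the pointer. The payload to encrypt is the oripkt.size bytes starting at oripkt.data. A minimal sketch, not part of the original question, of dumping those bytes for a keyframe packet (dump_packet_prefix is a hypothetical helper; it assumes the same FFmpeg headers as above plus <cstdio> and <algorithm>):

#include <cstdio>
#include <algorithm>

// Print the first max_bytes bytes of a packet in hex; these are the same
// bytes an encryption pass over the I-frame would rewrite in place.
static void dump_packet_prefix(const AVPacket *pkt, int max_bytes)
{
    int n = std::min(pkt->size, max_bytes);
    for (int i = 0; i < n; i++)
        std::printf("%02x ", pkt->data[i]);
    std::printf("... (%d bytes total)\n", pkt->size);
}

Calling dump_packet_prefix(&oripkt, 16) inside the AV_PKT_FLAG_KEY branch would show the start of each keyframe's data.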