
Media (91)
-
Head down (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
Echoplex (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
Discipline (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
Letting you (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
1 000 000 (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
999 999 (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
Other articles (111)
-
User profiles
12 April 2011, by
Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized, visible only when the visitor is logged in on the site.
The user can edit their profile from their author page; a "Modifier votre profil" link in the navigation is (...)
-
Configuring language support
15 November 2010, by
Accessing the configuration and adding supported languages
To configure support for new languages, you need to go to the "Administrer" section of the site.
From there, in the navigation menu, you can reach a "Gestion des langues" section that lets you enable support for new languages.
Each newly added language can still be disabled as long as no object has been created in that language; once one has, the language becomes greyed out in the configuration and (...)
-
XMP PHP
13 May 2011, by
According to Wikipedia, XMP stands for:
Extensible Metadata Platform, or XMP, is an XML-based metadata format used in PDF, photography, and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
Being based on XML, it manages a set of dynamic tags for use within the Semantic Web.
XMP makes it possible to record information about a file as an XML document: title, author, history (...)
On other sites (6134)
-
playing video in android using javacv ffmpeg
20 February 2017, by d91
I'm trying to play a video stored on the SD card using JavaCV. The following is my code:
public class MainActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        playthevideo();
    }

    protected void playthevideo() {
        String imageInSD = "/storage/080A-0063/dama/" + "test3.mp4";
        FFmpegFrameGrabber grabber = new FFmpegFrameGrabber(imageInSD);
        AndroidFrameConverter converterToBitmap = new AndroidFrameConverter();
        OpenCVFrameConverter.ToIplImage converterToipi = new OpenCVFrameConverter.ToIplImage();
        try {
            Log.d("Tag", "try");
            grabber.start();
            Log.d("Tag", "started");
            int i = 0;
            IplImage grabbedImage = null;
            ImageView mimg = (ImageView) findViewById(R.id.a);
            grabber.setFrameRate(grabber.getFrameRate());
            ArrayList<Bitmap> bitmapArray = new ArrayList<Bitmap>();
            while (((grabbedImage = converterToipi.convert(grabber.grabImage())) != null)) {
                Log.d("Tag", String.valueOf(i));
                int width = grabbedImage.width();
                int height = grabbedImage.height();
                if (grabbedImage == null) {
                    Log.d("Tag", "error");
                }
                IplImage container = IplImage.create(width, height, IPL_DEPTH_8U, 4);
                cvCvtColor(grabbedImage, container, CV_BGR2RGBA);
                Bitmap bitmapnew = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
                bitmapnew.copyPixelsFromBuffer(container.getByteBuffer());
                if (bitmapnew == null) {
                    Log.d("Tag", "bit error");
                }
                mimg.setImageBitmap(bitmapnew);
                mimg.requestFocus();
                i++;
            }
            Log.d("Tag", "go");
        }
        catch (Exception e) {
        }
    }
}
Just ignore the log tags; those are only there for my testing purposes.
When I run this code, the main activity layout keeps loading while the Android monitor shows the value of "i" (the current frame number); then, after frame number 3671, the code suddenly exits the while loop and the ImageView
shows a frame which is not the end frame of the video (it is somewhere around the start of the video).
I was unable to find another way to show the frames grabbed from FFmpegFrameGrabber, so I decided to show the image in the ImageView this way. Can anybody tell me why I'm getting this behaviour, or suggest another way to play and show the video in an Android activity?
BTW, javacv 1.3.1 is imported correctly into my Android dev environment. Thanks.
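For comparison, here is a minimal, untested sketch of the pattern usually suggested with JavaCV on Android: grab on a background thread (so the activity layout can finish loading), convert each Frame straight to a Bitmap with AndroidFrameConverter instead of going through IplImage and cvCvtColor, and post each result to the UI thread. The file path and view id are the ones from the question; the sleep-based pacing is a simplification.

// Untested sketch, assumed to live inside MainActivity (so runOnUiThread is available).
// Assumed imports: org.bytedeco.javacv.FFmpegFrameGrabber, AndroidFrameConverter, Frame;
// android.graphics.Bitmap; android.widget.ImageView.
protected void playthevideo() {
    final AndroidFrameConverter converter = new AndroidFrameConverter();
    new Thread(new Runnable() {
        @Override
        public void run() {
            FFmpegFrameGrabber grabber = new FFmpegFrameGrabber("/storage/080A-0063/dama/test3.mp4");
            try {
                grabber.start();
                double fps = grabber.getFrameRate();
                long frameDelayMs = fps > 0 ? (long) (1000 / fps) : 33; // crude pacing
                Frame frame;
                while ((frame = grabber.grabImage()) != null) {
                    final Bitmap bitmap = converter.convert(frame); // converter picks the right pixel format
                    runOnUiThread(new Runnable() {
                        @Override
                        public void run() {
                            ((ImageView) findViewById(R.id.a)).setImageBitmap(bitmap);
                        }
                    });
                    Thread.sleep(frameDelayMs); // a real player would sync to frame timestamps
                }
                grabber.stop();
            } catch (Exception e) {
                Log.e("Tag", "grab failed", e); // surface errors instead of swallowing them
            }
        }
    }).start();
}
-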
How to encode 3840 nb_samples to a codec that asks for 1024 using ffmpeg
26 July 2018, by Gabulit
FFmpeg has example muxing code at https://ffmpeg.org/doxygen/4.0/muxing_8c-example.html
This code generates video and audio frame by frame. What I am trying to do is to change

ost->tmp_frame = alloc_audio_frame(AV_SAMPLE_FMT_S16, c->channel_layout,
                                   c->sample_rate, nb_samples);

to

ost->tmp_frame = alloc_audio_frame(AV_SAMPLE_FMT_S16, c->channel_layout,
                                   c->sample_rate, 3840);

so that it generates 3840 samples per channel instead of 1024 samples, which is the default nb_samples for the aac codec.
I tried to combine it with code from https://ffmpeg.org/doxygen/4.0/transcode_aac_8c-example.html, which has an example of buffering the frames.
My resulting program crashes after generating audio samples for a couple of frames, when assigning *q++ a new value at the first iteration:
/* Prepare a 16 bit dummy audio frame of 'frame_size' samples and
 * 'nb_channels' channels. */
static AVFrame *get_audio_frame(OutputStream *ost)
{
    AVFrame *frame = ost->tmp_frame;
    int j, i, v;
    int16_t *q = (int16_t*)frame->data[0];

    /* check if we want to generate more frames */
    if (av_compare_ts(ost->next_pts, ost->enc->time_base,
                      STREAM_DURATION, (AVRational){ 1, 1 }) >= 0)
        return NULL;

    for (j = 0; j < frame->nb_samples; j++) {
        v = (int)(sin(ost->t) * 10000);
        for (i = 0; i < ost->enc->channels; i++)
            *q++ = v;
        ost->t += ost->tincr;
        ost->tincr += ost->tincr2;
    }

    frame->pts = ost->next_pts;
    ost->next_pts += frame->nb_samples;

    return frame;
}

Maybe I don't get the logic behind encoding.
Here is the full source that I've come up with:
The reason I am trying to accomplish this task is that I have a capture card SDK that outputs 2-channel, 16-bit raw PCM at 48000 Hz with 3840 samples per channel, and I am trying to encode its output to AAC. So basically, if I get the muxing example to work with 3840 nb_samples, this will help me understand the concept.
I have already looked at "How to encode resampled PCM-audio to AAC using ffmpeg-API when input pcm samples count not equal 1024", but that example uses "encodeFrame", which the examples in the ffmpeg documentation don't use, unless I am mistaken.
Any help is greatly appreciated.
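For context, the way the transcode_aac example bridges this mismatch is an AVAudioFifo: write however many samples arrive (3840 here), then read exactly enc_ctx->frame_size samples (1024 for AAC) per encoder frame. Below is a minimal, untested sketch of that idea; encode_and_write() is a hypothetical helper standing in for the usual send-frame/receive-packet/mux calls, and error checking is omitted.

#include <libavcodec/avcodec.h>
#include <libavutil/audio_fifo.h>

void encode_and_write(AVCodecContext *enc_ctx, AVFrame *frame); /* hypothetical helper */

static AVAudioFifo *fifo;      /* holds samples between input and encoder */
static int64_t audio_next_pts; /* running pts, counted in samples */

void init_fifo(AVCodecContext *enc_ctx)
{
    /* room for at least one full capture-card chunk */
    fifo = av_audio_fifo_alloc(enc_ctx->sample_fmt, enc_ctx->channels, 3840);
}

void buffer_and_encode(AVCodecContext *enc_ctx, AVFrame *input_frame)
{
    /* store all 3840 incoming samples per channel */
    av_audio_fifo_write(fifo, (void **)input_frame->data, input_frame->nb_samples);

    /* drain in chunks of enc_ctx->frame_size (1024 for AAC) */
    while (av_audio_fifo_size(fifo) >= enc_ctx->frame_size) {
        AVFrame *out = av_frame_alloc();
        out->nb_samples     = enc_ctx->frame_size;
        out->channel_layout = enc_ctx->channel_layout;
        out->format         = enc_ctx->sample_fmt;
        out->sample_rate    = enc_ctx->sample_rate;
        av_frame_get_buffer(out, 0);
        av_audio_fifo_read(fifo, (void **)out->data, enc_ctx->frame_size);

        out->pts = audio_next_pts;      /* pts in sample units, as in muxing.c */
        audio_next_pts += out->nb_samples;

        encode_and_write(enc_ctx, out); /* hypothetical: avcodec_send_frame + mux */
        av_frame_free(&out);
    }
}

The leftover samples (3840 = 3 × 1024 + 768) stay in the FIFO until the next chunk arrives, which is why the pts has to be tracked across calls rather than taken from each input frame.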
-
ffmpeg configuration difficulty with filter_complex and hls
4 February 2020, by akc42
I am trying to set up ffmpeg so that it records from a microphone and at the same time encodes the result into a .flac file, for later syncing with some video I will be making.
The microphone is plugged into a Raspberry Pi (4B); I am currently trying it with a Blue Yeti mic, but I think I can do the same with a Focusrite Scarlett 2i2 plugged in instead. However, I was puzzling over how to start the recording and decided I could do it from a web browser if I made a simple Node.js server that spawned ffmpeg as a child process.
But then I was inspired by this sample ffmpeg command, which displays a volume meter (on my desktop, which has a graphical interface):
ffmpeg -hide_banner -i 'http://distribution.bbb3d.renderfarming.net/video/mp4/bbb_sunflower_1080p_30fps_normal.mp4' -filter_complex "showvolume=rate=25:f=0.95:o=v:m=p:dm=3:h=80:w=480:ds=log:s=2" -c:v libx264 -c:a aac -f mpegts - | ffplay -window_title "Peak Volume" -i -
What if I could stream the video produced by the showvolume filter to the web browser that I am using to control the ffmpeg process? (Note: I don't want to send the audio with this.) So I tried to read up on HLS (since the control device will be an iPad; in fact, that is what I will record the video on), and came up with this command:

ffmpeg -hide_banner -f alsa -ac 2 -ar 48k -i hw:CARD=Microphone -filter_complex "asplit=2[main][vol],[vol]showvolume=rate=25:f=0.95:o=v:m=p:dm=3:h=80:w=480:ds=log:s=2[vid]" -map [main] -c:a:0 flac recordings/session_$(date +%a_%d_%b_%Y___%H_%M_%S).flac -map [vid] -preset veryfast -g 25 -an -sc_threshold 0 -c:v:1 libx264 -b:v:1 2000k -maxrate:v:1 2200k -bufsize:v:3000k -f hls -hls_time 4 -hls_flags independent_segments delete_segments -strftime 1 -hls_segment_filename recordings/volume-%Y%m%d-%s.ts recordings/volume.m3u8
The problem is that I find the documentation a bit opaque as to what happens once I have generated two streams (the main audio and a video stream), and this command throws both a warning and an error.
The warning is:
Guessed Channel Layout for Input Stream #0.0 : stereo
and the error is:
[NULL @ 0x1baa130] Unable to find a suitable output format for 'hls'
hls: Invalid argument
What I am trying to do is set up stream labels [main] and [vol] as I split the incoming audio into two parts, then pass [vol] through the showvolume filter and end up with stream [vid].
I think I then need to use -map to encode the [main] stream to flac and write it out to a file (the file exists after I run the command, although it has zero length), and use another -map to pass [vid] through to the -f hls section. But I think I have something wrong at this stage.
Can someone help me get this command right?
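An untested reading of that error: the malformed -bufsize:v:3000k (a ':' where a space should be) likely swallows the following -f as its option value, leaving the bare word "hls" to be parsed as an output filename, which would produce exactly "Unable to find a suitable output format for 'hls'". Two further suspects are the comma between the two labeled filter chains (a ';' is expected there) and the space in "independent_segments delete_segments" (multiple -hls_flags values are joined with '+'). A cleaned-up sketch of the command under those assumptions:

ffmpeg -hide_banner -f alsa -ac 2 -ar 48k -i hw:CARD=Microphone \
  -filter_complex "asplit=2[main][vol];[vol]showvolume=rate=25:f=0.95:o=v:m=p:dm=3:h=80:w=480:ds=log:s=2[vid]" \
  -map "[main]" -c:a flac "recordings/session_$(date +%a_%d_%b_%Y___%H_%M_%S).flac" \
  -map "[vid]" -c:v libx264 -preset veryfast -g 25 -sc_threshold 0 \
  -b:v 2000k -maxrate 2200k -bufsize 3000k \
  -f hls -hls_time 4 -hls_flags independent_segments+delete_segments \
  -strftime 1 -hls_segment_filename "recordings/volume-%Y%m%d-%s.ts" \
  recordings/volume.m3u8

Since each output here carries a single stream, plain -c:a and -c:v specifiers are enough; the per-stream forms (-c:a:0, -c:v:1) are only needed when one output holds several streams. The "Guessed Channel Layout" warning is a separate issue and should be harmless: it only means ALSA did not report a layout for the two capture channels.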