
Other articles (54)
-
Websites made with MediaSPIP
2 May 2011. This page lists some websites based on MediaSPIP.
-
Creating farms of unique websites
13 April 2011. MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows, among other things: implementation costs to be shared between several different projects or individuals; rapid deployment of multiple unique sites; and creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...) -
Contribute to a better visual interface
13 April 2011. MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.
On other sites (9098)
-
AsyncTask publishProgress() does not update progress ffmpeg android
7 February 2014, by jay. I am using ffmpeg commands to process media files. In doInBackground() I start the process, read the duration and time values from its output each time around, compute the progress from them, and call publishProgress(progress). On a Google Nexus (Android 4.4, KitKat) the progress dialog updates correctly, but on devices below Android 4.4 it does not: the dialog only updates, in the blink of an eye, after the process has completed.
Here is my code:

protected String doInBackground(String... params) {
    try {
        proc = mProcess.start();
    } catch (IOException e) {
        e.printStackTrace();
    }
    processDuration(proc.getErrorStream());
    // Wait for the process to exit
    int exitCode = 1; // assume error
    try {
        exitCode = proc.waitFor();
    } catch (InterruptedException e) {
        Log.e(TAG, "Process interrupted!", e);
    }
    onExit(exitCode);
    return null;
}

private void onExit(int exitCode) {
    Log.i("exit code >>>>>>>>..", "" + exitCode);
}

private void processDuration(InputStream errorStream) {
    Scanner sc = new Scanner(errorStream);
    // Find the total duration in ffmpeg's banner output
    Pattern durPattern = Pattern.compile("(?<=Duration: )[^,]*");
    String dur = sc.findWithinHorizon(durPattern, 0);
    if (dur == null) throw new RuntimeException("Could not parse duration.");
    String[] hms = dur.split(":");
    try {
        totalSecs = Integer.parseInt(hms[0]) * 3600
                  + Integer.parseInt(hms[1]) * 60
                  + Double.parseDouble(hms[2]);
        Log.i(" progress>>>>>>>>>>>>>", "" + totalSecs);
    } catch (NumberFormatException e) {
        // ignore a malformed duration field
    }
    // Read each "time=" stamp as ffmpeg reports progress; keep draining
    // stderr promptly so ffmpeg is never blocked on a full pipe buffer
    Pattern timePattern = Pattern.compile("(?<=time=)[\\d:.]*");
    String match;
    while ((match = sc.findWithinHorizon(timePattern, 0)) != null) {
        hms = match.split(":");
        try {
            processedSecs = Integer.parseInt(hms[0]) * 3600
                          + Integer.parseInt(hms[1]) * 60
                          + Double.parseDouble(hms[2]);
        } catch (NumberFormatException e) {
            // ignore a malformed time stamp
        }
        progress = processedSecs / totalSecs;
        final int finalProgress = (int) (progress * 100);
        publishProgress("" + finalProgress);
    }
    publishProgress("" + 100);
}

protected void onPostExecute(String result) {
    super.onPostExecute(result);
    mProgressDialog.dismiss();
}

protected void onPreExecute() {
    super.onPreExecute();
    showDialog(DIALOG_DOWNLOAD_PROGRESS);
}

protected void onProgressUpdate(String... progress) {
    mProgressDialog.setProgress(Integer.parseInt(progress[0]));
    super.onProgressUpdate(progress);
}

public Dialog showDialog(int id) {
    switch (id) {
    case DIALOG_DOWNLOAD_PROGRESS:
        mProgressDialog = new ProgressDialog(context);
        mProgressDialog.setMessage("loading process..");
        mProgressDialog.setProgressStyle(ProgressDialog.STYLE_HORIZONTAL);
        mProgressDialog.setCancelable(false);
        mProgressDialog.setMax(100);
        mProgressDialog.setCanceledOnTouchOutside(false);
        mProgressDialog.show();
        return mProgressDialog;
    default:
        return null;
    }
}
}

Thanks for your help.
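For what it's worth, the duration/time arithmetic in processDuration() can be exercised off-device with a small standalone sketch. The class name and sample log below are my own, not part of the question's code; the only change from the logic above is that the Scanner reads from a String instead of proc.getErrorStream():

```java
import java.util.Scanner;
import java.util.regex.Pattern;

// Standalone sketch of the progress math used above: find ffmpeg's
// "Duration:" banner, follow the repeated "time=" stamps, and turn
// the latest one into a 0-100 percentage.
public class FfmpegProgress {

    // "HH:MM:SS.ms" -> seconds
    static double toSeconds(String hms) {
        String[] p = hms.split(":");
        return Integer.parseInt(p[0]) * 3600
                + Integer.parseInt(p[1]) * 60
                + Double.parseDouble(p[2]);
    }

    public static int percent(String stderrText) {
        Scanner sc = new Scanner(stderrText);
        String dur = sc.findWithinHorizon(
                Pattern.compile("(?<=Duration: )[^,]*"), 0);
        if (dur == null) {
            return 0; // banner not seen yet
        }
        double total = toSeconds(dur.trim());
        Pattern timePat = Pattern.compile("(?<=time=)[\\d:.]+");
        double done = 0;
        String m;
        while ((m = sc.findWithinHorizon(timePat, 0)) != null) {
            done = toSeconds(m); // keep the latest stamp
        }
        return (int) Math.min(100, done / total * 100);
    }

    public static void main(String[] args) {
        String log = "Duration: 00:00:10.00, start: 0.000000\n"
                + "frame=  50 fps=25 time=00:00:05.00 bitrate= 800k\n";
        System.out.println(percent(log)); // prints 50
    }
}
```

Running the sketch on a captured stderr log is a quick way to confirm the parsing itself is sound before debugging the AsyncTask side.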
Please help me solve this problem. -
H264 HW accelerated decoding in Android using stagefright library
21 February 2014, by user3215358. I'm trying to decode h264 video in hardware using the Stagefright library.
I used the example given here. I am getting the decoded data in a MediaBuffer. To render MediaBuffer->data() I tried AwesomeLocalRenderer from AwesomePlayer.cpp, but the picture on screen is distorted.
Here is the link to the original and the corrupted picture.
I also tried this, from the example:
sp<MetaData> metaData = mVideoBuffer->meta_data();
int64_t timeUs = 0;
metaData->findInt64(kKeyTime, &timeUs);
native_window_set_buffers_timestamp(mNativeWindow.get(), timeUs * 1000);
err = mNativeWindow->queueBuffer(mNativeWindow.get(),
                                 mVideoBuffer->graphicBuffer().get(), -1);
But my native code crashes. I can't get a real picture; it is either corrupted or a black screen.
Please, I need your help. What am I doing wrong?
Thanks in advance.
-
How can I fix CalledProcessError in ffmpeg?
7 December 2020, by Mario. I hope someone can help troubleshoot this problem. I'm trying to save the loss plots out of Keras in the form of an animation, but I have been facing the following error, and ultimately I can't save the animation:

MovieWriter stderr:
[h264_v4l2m2m @ 0x55a67176f430] Could not find a valid device
[h264_v4l2m2m @ 0x55a67176f430] can't configure encoder
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height

---------------------------------------------------------------------------
BrokenPipeError Traceback (most recent call last)
~/anaconda3/envs/CR7/lib/python3.6/site-packages/matplotlib/animation.py in saving(self, fig, outfile, dpi, *args, **kwargs)
 229 try:
--> 230 yield self
 231 finally:

~/anaconda3/envs/CR7/lib/python3.6/site-packages/matplotlib/animation.py in save(self, filename, writer, fps, dpi, codec, bitrate, extra_args, metadata, extra_anim, savefig_kwargs, progress_callback)
 1155 frame_number += 1
-> 1156 writer.grab_frame(**savefig_kwargs)
 1157 

~/anaconda3/envs/CR7/lib/python3.6/site-packages/matplotlib/animation.py in grab_frame(self, **savefig_kwargs)
 383 self.fig.savefig(self._frame_sink(), format=self.frame_format,
--> 384 dpi=self.dpi, **savefig_kwargs)
 385 

~/anaconda3/envs/CR7/lib/python3.6/site-packages/matplotlib/figure.py in savefig(self, fname, transparent, **kwargs)
 2179 
-> 2180 self.canvas.print_figure(fname, **kwargs)
 2181 

~/anaconda3/envs/CR7/lib/python3.6/site-packages/matplotlib/backend_bases.py in print_figure(self, filename, dpi, facecolor, edgecolor, orientation, format, bbox_inches, **kwargs)
 2081 bbox_inches_restore=_bbox_inches_restore,
-> 2082 **kwargs)
 2083 finally:

~/anaconda3/envs/CR7/lib/python3.6/site-packages/matplotlib/backends/backend_agg.py in print_raw(self, filename_or_obj, *args, **kwargs)
 445 cbook.open_file_cm(filename_or_obj, "wb") as fh:
--> 446 fh.write(renderer._renderer.buffer_rgba())
 447 

BrokenPipeError: [Errno 32] Broken pipe

During handling of the above exception, another exception occurred:

CalledProcessError Traceback (most recent call last)
 in <module>
 17 print(f'{model_type.upper()} Train Time: {Timer} sec')
 18 
---> 19 create_loss_animation(model_type, hist.history['loss'], hist.history['val_loss'], epoch)
 20 
 21 evaluate(model, trainX, trainY, testX, testY, scores_train, scores_test)

 in create_loss_animation(model_type, loss_hist, val_loss_hist, epoch)
 34 
 35 ani = matplotlib.animation.FuncAnimation(fig, animate, fargs=(l1, l2, loss, val_loss, title), repeat=True, interval=1000, repeat_delay=1000)
---> 36 ani.save(f'loss_animation_{model_type}_oneDataset.mp4', writer=writer)

~/anaconda3/envs/CR7/lib/python3.6/site-packages/matplotlib/animation.py in save(self, filename, writer, fps, dpi, codec, bitrate, extra_args, metadata, extra_anim, savefig_kwargs, progress_callback)
 1154 progress_callback(frame_number, total_frames)
 1155 frame_number += 1
-> 1156 writer.grab_frame(**savefig_kwargs)
 1157 
 1158 # Reconnect signal for first draw if necessary

~/anaconda3/envs/CR7/lib/python3.6/contextlib.py in __exit__(self, type, value, traceback)
 97 value = type()
 98 try:
---> 99 self.gen.throw(type, value, traceback)
 100 except StopIteration as exc:
 101 # Suppress StopIteration *unless* it's the same exception that

~/anaconda3/envs/CR7/lib/python3.6/site-packages/matplotlib/animation.py in saving(self, fig, outfile, dpi, *args, **kwargs)
 230 yield self
 231 finally:
--> 232 self.finish()
 233 
 234 

~/anaconda3/envs/CR7/lib/python3.6/site-packages/matplotlib/animation.py in finish(self)
 365 def finish(self):
 366 '''Finish any processing for writing the movie.'''
--> 367 self.cleanup()
 368 
 369 def grab_frame(self, **savefig_kwargs):

~/anaconda3/envs/CR7/lib/python3.6/site-packages/matplotlib/animation.py in cleanup(self)
 409 if self._proc.returncode:
 410 raise subprocess.CalledProcessError(
--> 411 self._proc.returncode, self._proc.args, out, err)
 412 
 413 @classmethod

CalledProcessError: Command '['/usr/bin/ffmpeg', '-f', 'rawvideo', '-vcodec', 'rawvideo', '-s', '720x720', '-pix_fmt', 
'rgba', '-r', '5', '-loglevel', 'error', '-i', 'pipe:', '-vcodec', 'h264', '-pix_fmt', 'yuv420p', '-b', '800k', '-y', 
'loss_animation_CNN_oneDataset.mp4']' returned non-zero exit status 1.



I tried to ignore the error as suggested in this answer, but that does not seem to be the case here. I also checked a similar case, but its answer, getting a static git binary, does not fit my case either, since I am not specifically converting PNG to MP4!



My code is as follows:



import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
from matplotlib import animation

plt.rcParams['animation.ffmpeg_path'] = '/usr/bin/ffmpeg'

def animate(i, data1, data2, line1, line2):
    # Reveal one more point of each series per frame
    temp1 = data1.iloc[:int(i + 1)]
    temp2 = data2.iloc[:int(i + 1)]

    line1.set_data(temp1.index, temp1.value)
    line2.set_data(temp2.index, temp2.value)

    return (line1, line2)


def create_loss_animation(model_type, data1, data2):
    fig = plt.figure()
    plt.title('Loss on Train & Test', fontsize=25)
    plt.xlabel('Epoch', fontsize=20)
    plt.ylabel('Loss MSE for Sx-Sy & Sxy', fontsize=20)
    plt.xlim(min(data1.index.min(), data2.index.min()),
             max(data1.index.max(), data2.index.max()))
    plt.ylim(min(data1.value.min(), data2.value.min()),
             max(data1.value.max(), data2.value.max()))

    l1, = plt.plot([], [], 'o-', label='Train Loss', color='b', markevery=[-1])
    l2, = plt.plot([], [], 'o-', label='Test Loss', color='r', markevery=[-1])
    plt.legend(loc='center right', fontsize='xx-large')

    Writer = animation.writers['ffmpeg']
    writer = Writer(fps=5, bitrate=1800)

    # frames=len(data1) gives save() a finite frame count
    ani = animation.FuncAnimation(fig, animate, frames=len(data1),
                                  fargs=(data1, data2, l1, l2),
                                  repeat=True, interval=1000,
                                  repeat_delay=1000)
    ani.save(f'{model_type}.mp4', writer=writer)

# create datasets
x = np.linspace(0, 150, 50)
y1 = 41 * np.exp(-x / 20)
y2 = 35 * np.exp(-x / 50)

my_data_number_1 = pd.DataFrame({'x': x, 'value': y1}).set_index('x')
my_data_number_2 = pd.DataFrame({'x': x, 'value': y2}).set_index('x')

create_loss_animation('test', my_data_number_1, my_data_number_2)
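The stderr above points at the encoder, not the plotting code: ffmpeg resolved the bare codec name h264 to the h264_v4l2m2m hardware encoder and could not open a V4L2 device. A hedged workaround, assuming your /usr/bin/ffmpeg build includes the software libx264 encoder, is to name the codec explicitly when constructing the writer:

```python
from matplotlib.animation import FFMpegWriter

# Request the software x264 encoder explicitly instead of the bare
# 'h264' name, which ffmpeg resolved to the unavailable v4l2m2m
# hardware device; yuv420p keeps the file playable in most players.
writer = FFMpegWriter(fps=5, bitrate=1800, codec='libx264',
                      extra_args=['-pix_fmt', 'yuv420p'])
```

Then pass this writer to ani.save(...) as before. If libx264 turns out to be missing too, ffmpeg -encoders lists what the build actually offers.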