20 July 2014

Get S.M.A.R.T.

As followers of this blog may know, I've been having a cacophony of hardware problems lately. Most of them revolve around that one inevitability of packing more and more data into tinier and tinier spaces: Hard disk corruption. I've been busy moving my vital datas onto an older machine of mine and setting it up to host all my source code, so now is a great time to get paranoid about disk integrity.

10 July 2014

Oh no not again

So, I was doing a routine upgrade of my (very) old laptop the other day. It no longer has a working battery and doesn't quite have the power I want for modern day-to-day stuff, but it's served me very well as an SSH gateway, Subversion server and a place to keep my IRC session idling. Old laptops can make really nice servers since they're typically quiet, draw little power, and come with their own keyboard and monitor.

But I digress.

I was upgrading the packages, and one of those was a kernel update, so I rebooted for the first time in months and...

Remember how I had hard drive problems recently? Yeah.

30 June 2014

Decimating Directories

Whenever you set up some automated system that produces files, there's always that nagging fear that you'll forget about it and it will run rampant, filling up your hard drive with clutter. One good example is using motion as a security system - you want to keep the most recent video clips in case you need to refer back to them, but there's little point in keeping the oldest ones.

Keeping only the most recent n videos and deleting the rest could be problematic, because the individual files could be large. Keeping anything younger than a certain number of days is no good, because there could be a burst of activity that creates a lot of files. So we want to make a script that will trim a directory of files down to a specific size.
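The post goes through it properly, but the core of the idea is small enough to sketch right here: sort the files oldest-first and delete until the total size fits under a limit. Treat this as a rough illustration rather than the finished script - the directory path and the size limit below are placeholders I've made up:-
#!/usr/bin/perl
use warnings;
use strict;
use File::stat;

# Placeholder values, purely for illustration.
my $dir        = "/var/lib/motion";
my $size_limit = 10 * 1024 * 1024 * 1024;   # 10 GiB

# Gather the regular files in the directory.
opendir(my $dh, $dir) or die "Can't open $dir: $!";
my @files = grep { -f $_ } map { "$dir/$_" } readdir($dh);
closedir($dh);

# Sort oldest-first, so the most recent files are the last to go.
my @sorted = sort { stat($a)->mtime <=> stat($b)->mtime } @files;

# Total up the current usage.
my $total = 0;
$total += -s $_ for @sorted;

# Delete the oldest files until we're under the limit.
foreach my $file (@sorted) {
   last if $total <= $size_limit;
   my $size = -s $file;
   if (unlink($file)) {
      $total -= $size;
   } else {
      warn "Couldn't delete $file: $!";
   }
}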

30 May 2014

Integrating Integrity, part 2

Last post, I made a perl script to generate MD5 checksums for me, while displaying a progress bar. Now I want to expand its functionality to generate a .md5sum file listing the md5 for everything in a given directory, or check all the files in the list to see if their actual md5 matches the 'correct' one. I will also set things up so that any checksum mismatches or other errors are reported at the end of the run so that they aren't pushed off the terminal's scrollback buffer when working with a large list of files.
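To give a rough idea of the checking half, here's a stripped-down sketch (no progress bar, and much cruder reporting than the real script) that reads an md5sum-style list, compares each file's actual digest against the recorded one, and saves any complaints until the end of the run; the list filename is just a placeholder:-
#!/usr/bin/perl
use warnings;
use strict;
use Digest::MD5;

my $list_file = "files.md5sum";   # Placeholder name for the checksum list.
my @problems;

open(my $list, '<', $list_file) or die "Can't open $list_file: $!";
while (my $line = <$list>) {
   chomp $line;
   # md5sum lines look like: 32 hex chars, a space, a space or '*', then the filename.
   my ($expected, $filename) = $line =~ /^([0-9a-fA-F]{32}) [ *](.*)$/ or next;
   if ( ! open(my $fh, '<', $filename)) {
      push @problems, "$filename: cannot open: $!";
      next;
   }
   binmode($fh);
   my $actual = Digest::MD5->new->addfile($fh)->hexdigest;
   close($fh);
   push @problems, "$filename: checksum mismatch!" if lc($actual) ne lc($expected);
}
close($list);

# Report all the bad news in one go, after everything has been checked.
print STDERR "$_\n" for @problems;
print STDERR scalar(@problems), " problem(s) found.\n";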

02 May 2014

Integrating Integrity, part 1

One of the most important parts of any backup solution is being able to identify when files have become corrupt due to a failing disk. Ideally, we'd be able to identify impending failure before the excrement hits the rotational cooling device, but we don't always have that luxury. I intend to cover things like S.M.A.R.T. disk checks in a later post; for today, I want to address per-file integrity checking. Because the only thing worse than having no backup is having a backup of the already-corrupt data.

md5sum has been my go-to tool for this in the past. The checksums it generates are slightly better than the old CRC32 method, and it's ubiquitous. While it is important to realise it is not a cryptographically secure checksum and cannot protect against malicious tampering, it is a very effective way to check a file for damage. However, while the venerable md5sum command works perfectly fine, I really want to make my own version with a few improvements that I find myself wanting.
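The building block underneath is tiny - Perl's core Digest::MD5 module will happily checksum a filehandle for you - so here's a minimal sketch of just that part, before any of the improvements (the progress bar and so on) get layered on top:-
#!/usr/bin/perl
use warnings;
use strict;
use Digest::MD5;

my $filename = shift @ARGV or die "Usage: $0 <file>\n";
open(my $fh, '<', $filename) or die "Can't open $filename: $!";
binmode($fh);   # Checksums are about bytes, not characters.

# addfile() reads the whole filehandle in chunks for us.
my $digest = Digest::MD5->new->addfile($fh)->hexdigest;
close($fh);

# Match md5sum's "digest  filename" output format.
print "$digest  $filename\n";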

04 April 2014

Rescue me!

So, I had hard drive problems recently. And then when I went to check out the backup I'd made - lucky me that I even had a semi-recent backup! - the drive that held the backup was also failing and corrupting the backup data.

The moral of this story is: BACKUPS! OMG BACKUPS! BACKUP YOUR SHIT RIGHT NOW! AND THEN BACKUP THE BACKUP! which is of course common sense and you don't need me to tell you that backups are important, gee la, you're not an idiot, you're one of my super-smart well-informed readers whom I respect 100%. Still, though, ... when was that last backup you did, exactly? Can you verify the integrity of the backup? Does it cover every single file you might want to restore in the event of a disksplosion?

I guess there is one upside to this: It's got me in a blogging mood again (Finally!) so you can expect a series of posts dealing with:-

  • Preparing for the inevitable failures
  • Watching for signs of early failure
  • Creating the One True Backup System that can finally give you peace of mind

Today? Let's get the ball rolling with a small post showing you how to set up a GRML rescue system embedded in your Ubuntu or Debian install.


18 November 2013

Naming Names, Part 2

Remember how, in the last post, I invited you all to join me "next week" for part 2? What a funny joke that was! In the meantime I've been busy being sick, getting sucked back into WoW thanks to a "gift" from a "friend", and have been dealing with hilariously bad hardware failures. But I'm finally back to finish what I started, because I owe you all that much.

In Part 1, we converted a crufty old shell script of mine to its Perl 5 equivalent, and then built it up a little to be smarter about how it goes about renaming files.

In Part 2, we go nuts with feature-creep. Follow me along for the ride.


30 October 2013

The iGoogles, they do nothing!

So the threat of iGoogle shutting down has been hanging over our heads for most of the year. One of the larger programming projects I wanted to do in this time would address that, and come up with something really nice that people could use as a replacement. Sadly, I haven't been working nearly as fast as I thought I could, and that project is still on the horizon. But November is almost upon us and iGoogle is going away now. What's to be done?

Why, it's time to throw something together quickly in Perl, of course.

Grabbing the Data

Google's support page for the shutdown has details on how to download a backup of your iGoogle data to your computer. The iGoogle-settings.xml file you end up with looks a bit like this:-
<?xml version="1.0" encoding="utf-8"?>
<GadgetTabML version="1.0" xmlns:iGoogle="http://www.google.com/ig" xmlns="http://schemas.google.com/GadgetTabML/2008">
  <SkinPreferences zipcode="Sydney, Australia" country="au" language="en" />
  <Tab title="Webcomics" skinUrl="http://www.elitedesigns.us/goothemes/MistedForest/doc_theme.xml">
    <Layout iGoogle:spec="THREE_COL_LAYOUT_1" />
    <Section>
      <Module type="RSS" xmlns="http://www.google.com/ig">
        <UserPref name="numItems" value="6" />
        <ModulePrefs xmlUrl="http://feed43.com/order_of_the_stick_comic.xml" />
      </Module>
      <Module type="RSS" xmlns="http://www.google.com/ig">
        <UserPref name="numItems" value="6" />
        <ModulePrefs xmlUrl="http://www.girlgeniusonline.com/ggmain.rss" />
      </Module>
      <Module type="RSS" xmlns="http://www.google.com/ig">
        <UserPref name="numItems" value="8" />
        <ModulePrefs xmlUrl="http://drmcninja.com/feed/" />
      </Module>
While there are many other Module types to consider, I'm only interested in extracting the URLs of the RSS feeds that I follow, and constructing some HTML page with links from those feeds. It'll be slow to fetch each individual feed, and the script will need to be run manually, but it should serve as a decent replacement.

The eXtra Messy Language

The first thing you'll notice is that the iGoogle backup is in XML. XML is great in some ways but it is dishearteningly verbose and cryptic a lot of the time. We could look up the schema Google has made and create a proper model to represent the entire dataset in our program, or we could just rip out the bits we want with XML::Twig. XML::Twig is a nice Perl 5 library that makes parsing XML relatively painless.

It looks like we'd mostly be interested in identifying the <Tab title="Blah"> elements, and then getting the xmlUrl attribute out of the <ModulePrefs xmlUrl="http://blah"> element for each <Module> (if it possesses the "RSS" type) within the tab. I could also pull out the numItems value, but for the purposes of a quick hack to replace iGoogle, I could happily see myself hard-coding the same number of items for all the feeds.

XML::Twig supports two styles of operation. The first is loading the entire document into memory in one go, letting you use traditional operations on elements to traverse the XML tree however you want. The second allows you to pre-define which parts of the tree interest you and provide XML::Twig with callbacks to your own code that get run on those portions. This second mode is great for large XML documents where loading the entire thing isn't feasible. Our iGoogle settings file isn't exactly large, but the callback mode is still a nice way to deal with things. Let's try it out here.
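For contrast, the first (load-the-lot) style would look roughly like this - an untested sketch, just to show the shape of it:-
#!/usr/bin/perl
use warnings;
use strict;
use XML::Twig;

# Tree mode: parse the whole file up front, then walk it with element methods.
my $twig = XML::Twig->new();
$twig->parsefile("iGoogle-settings.xml");

foreach my $tab ($twig->root->children('Tab')) {
   my $tab_name = $tab->att('title');
   # The navigation methods accept conditions like 'Module[@type="RSS"]' too.
   foreach my $module ($tab->descendants('Module[@type="RSS"]')) {
      my $prefs = $module->first_child('ModulePrefs') or next;
      print $tab_name, " => ", $prefs->att('xmlUrl'), "\n";
   }
}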

The first cut of the script looks like this:-
#!/usr/bin/perl

use warnings;
use strict;
use utf8;
binmode(STDOUT, ":utf8");
binmode(STDERR, ":utf8");
use XML::Twig;    # Package Name: libxml-twig-perl
use Data::Dumper;
use FindBin qw/$RealBin/;

my $igoogle_settings_filename = "$RealBin/iGoogle-settings.xml";
print STDERR "Opening iGoogle settings file: $igoogle_settings_filename\n";

my $tabs = {}; # Tab Name => [ urls ];
# Called by XML::Twig on each ModulePrefs part.
sub collect_feed_urls
{
   my ($twig, $element) = @_;
   # Extract the URL from this element, and the tab name from its ancestor.
   my $url = $element->att('xmlUrl');
   my $tab_name = $element->parent('Tab')->att('title');
   # Put feed urls in an array ref, grouped by tab name.
   $tabs->{$tab_name} = [] unless defined $tabs->{$tab_name};
   push @{$tabs->{$tab_name}}, $url;
}

# Create our Twig object and tell it what bits we're interested in.
my $twig = XML::Twig->new(
      twig_handlers => {
         'Tab//Module[@type="RSS"]/ModulePrefs' => \&collect_feed_urls,
      }
   );
$twig->parsefile($igoogle_settings_filename); # XML::Twig will just die if there's a problem. Fine by me.

# Print everything out to check it worked so far.
print STDERR Dumper($tabs);
There's not much to say, really - the preamble just loads the modules we want to use, and determines where the XML file is (in the same directory as the script). Then we create a variable called $tabs, a hash reference to a bunch of array references. This is where we'll store the feed URLs. The subroutine collect_feed_urls is the callback we give to XML::Twig, and it closes over that $tabs variable. This means that whenever it runs, the $tabs that it references is the one we declared in the main script, and it can shove all the data in there. I'm essentially using it as a global variable, except it's way more elegant than that really, honest.

One keyword you might be unfamiliar with in the subroutine is unless - a beautiful part of Perl which is effectively just if not, but without necessitating parentheses around the whole expression. And of course, to be Perlish, I'm using the condition at the end of the statement, since it's only one statement I want to affect. If we were writing in a more C-ish style, that one line might turn into:-
if ( ! defined $tabs->{$tab_name})
{
    $tabs->{$tab_name} = [];
}

Moving on. Creating our Twig object with XML::Twig->new() allows us to set some global options for the module, as well as defining the handlers that should be triggered on certain parts of the document. The right hand side of the handler is just a subroutine reference, and could be written in-line if you've got a series of small transformations you wish to do. In our case, it's cleaner to refer to the subroutine we defined earlier using \&.

The left hand side specifies what XML::Twig should be looking for to run our handler. It can be as simple as the XML element name, or (in this case) an XPath-like expression. The expression Tab//Module[@type="RSS"]/ModulePrefs means we are looking for a <ModulePrefs> element contained in a <Module> element with a 'type' attribute set to "RSS", with any number of other enclosing elements allowed as long as there's a <Tab> as an ancestor somewhere.

Finally, $twig->parsefile() tells it to go, and I use Data::Dumper to check that what we've got so far worked.

Really Simple except when it's not

We've got two main steps to deal with next. Firstly, we need to take each feed URL and fetch that from the Interwebs. Secondly, the document we get back from that URL should be in RSS (or "Really Simple Syndication") format. We'll want to parse out the individual items from that, and extract the title and link.

Grabbing a file from the internet over HTTP is easy enough in Perl; there's a plethora of libraries to do it for us. LWP::Simple should be more than sufficient - it exports a get(url) subroutine that returns the document text or undef on failure.

For the RSS parsing, we have a few options as well. RSS uses XML, so we could just use XML::Twig on it. However, while the iGoogle-settings.xml file's structure is known to us and won't be changing anytime soon, the kinds of RSS we might get from random webservers could be weird dialects of it or malformed or just completely bizarre. There's no way to know ahead of time, so it's better to use a library specifically made to deal with RSS content. XML::RSS is one such module. It can produce RSS files as well as consume them, but for now we're only interested in using its parse() method and then going through all the items.

The script is starting to get a little big, so it's time to think about putting more things into subroutines. I make a 'main' sub, plus 'get_feeds' and 'get_feed' subs to do the job of downloading and parsing the RSS:-
sub get_feeds
{
   my ($tabs) = @_;
   # Fetch the feeds and build a hashref of feed url => XML::RSS object (or undef)
   my $rsses = {};
   foreach my $url_list (values %$tabs) {
      foreach my $url (@$url_list) {
         $rsses->{$url} = get_feed($url);
      }
   }
   return $rsses;
}

sub get_feed
{
   my ($url) = @_;
   # Use LWP::Simple to fetch the page for us - returns 'undef' on any failure.
   print STDERR "Fetching $url ...";
   my $feed_text = get($url);
   if ( ! defined $feed_text) {
      print STDERR " Failed!\n";
      return;
   }

   # Obtained a bit of text over the interwebs, but is it valid RSS?
   my $rss = XML::RSS->new;
   eval {
      $rss->parse($feed_text);
   };
   if ($@) {
      print STDERR " Bad RSS feed!\n";
      return;
   }

   print STDERR " OK\n";
   return $rss;
}
Okay, we've got the feeds downloaded and made some sense out of them. What's next? I could print the RSS items to the terminal to check, but I have plenty of confidence that XML::RSS has done its thing successfully. And there's only a few days left before iGoogle shuts down, so let's hurry things along, shall we?

Spitting it all out

My plan is to have the script print out an HTML document. This means we'll need a bit of HTML boilerplate, wrapped around the main body, which will be divided into sections based on each old iGoogle tab, which in turn will feature a number of short lists of the top N items from each RSS feed. To keep things from getting too messy, this suggests a subroutine for each visual level in the document hierarchy. The output will be pretty plain to begin with, but we can always dress it up with CSS later.

There are a few tricks that are handy when you want to embed something strange like HTML strings inside Perl source. The most obvious trick is just to get a module to write the HTML tags for you, but I don't want to find one of those just yet. One simple way to put a large block of text in is using "here documents", by choosing some special string like "EOF" to terminate a multi-line bit of text:-
sub create_html
{
   my ($tabs, $rsses) = @_;
   my $output = "";
   $output .= <<EOF;
      <html>
         <head>
            <title>maiGoogle</title>
         </head>
         <body>
EOF

   $output .= html_body($tabs, $rsses);

   $output .= <<EOF;
         </body>
      </html>
EOF
   return $output;
}
The other alternative is to use the magic quote operators Perl gives us. Normally, you'd use double quotes (") to delimit a string literal that you want variables to be interpolated into, but that can be problematic when your HTML also has quote marks in it. Escaping everything with backslashes gets ugly fast. So instead, Perl lets you choose your quote character with qq :-
sub html_rss_feed
{
   my ($url, $rss) = @_;
   my $output = "";

   # Not all the feeds might have been fetched OK, or some might not have parsed properly.
   if ( ! defined $rss) {
      # Ideally, we'd remember why, but for now just say sorry.
      $output .= qq!<div class="feed">\n!;
      $output .= qq!  <span class="title"><a href="$url">$url</a></span>\n!;
      $output .= qq!  <p class="fail">This feed failed to load, sorry.</p>\n!;
      $output .= qq!</div>\n!;
      return $output;
   }
Here, I've chosen the bang (!) to indicate the start and end of my string, but you can use pretty much anything.

Anyway, now we've got some output, does it work?


It does! Fantastic! There's just one last issue to clear up...

Enforcing Sanity

It worked right up until the point where a feed I was getting items from gave me a title with more HTML in it. Naturally, that confused the hell out of the browser and it decided everything past that item was commented out thanks to the feed's malformed HTML. What we absolutely must do when using unknown data from the web is sanity-check it a little - in this case, I'm happy to just rip out anything that looks HTML-like and leave the titles as plain as possible. We could use some regexps to do this but again you never know what crazy things are possible with a spec as large as HTML - let's use a Perl module to do it for us. HTML::Strip looks like a good candidate.

We can write a small sub to strip out any of the bad HTML:-
sub sanify
{
   my ($str) = @_;
   my $hs = HTML::Strip->new();
   $str = $hs->parse($str);
   $hs->eof();
   return $str;
}
We do this rather than make one instance of HTML::Strip and use its ->parse() method each time, because we want to give each fragment of RSS a clean slate.
Let's give this a shot, what could possibly go wrong?

Things went wrong

The good news is that HTML::Strip fixed the broken html in the title of that one feed. The bad news is that, now I'm looking at things more carefully, there are a few remaining problems. Firstly, there's an encoding issue somewhere - a few characters are clearly encoded wrong. It happens when you're sourcing data from all over the place and smooshing it together into one HTML page. iGoogle did a pretty good job of that.

Secondly, some feeds just aren't ever getting loaded - some seem to time out waiting for the feed XML to arrive. We can fix this by abandoning LWP::Simple and using its bigger brother, LWP::UserAgent. In fact, after switching to LWP::UserAgent, I find that the reason some feeds were failing was because we were getting a "403 Forbidden" error. Spoofing the User Agent String and pretending to be Firefox fixes this. In an ideal internet it shouldn't, but the internet is far from ideal and it is a necessary fix.

The character encoding thing might require more time to debug, so for now I'll add a link to the feed itself after the title, and store any error messages I get in a second hashref that we can include in the output for any feeds that fail.


Edit: It was HTML::Strip. It either didn't like the unicode characters, or was over-zealous in stripping things. I don't know. Changing sanify() to a simpler implementation fixed the problem.
sub sanify
{
   my ($str) = @_;
   $str //= "";
   $str =~ s/&/&amp;/g;   # Ampersands first, so the entities added below don't get double-escaped.
   $str =~ s/</&lt;/g;
   $str =~ s/>/&gt;/g;
   return $str;
}

Here's the final script. It's kludgy in places, I still haven't figured out why some odd characters are present in some feeds, and the output is completely unstyled. But it works, and with iGoogle shutting down I want to get this done and out there now.
#!/usr/bin/perl

use warnings;
use strict;
use utf8::all;    # Package Name: libutf8-all-perl
use XML::Twig;    # Package Name: libxml-twig-perl
use XML::RSS;     # Package Name: libxml-rss-perl
use LWP::UserAgent;
use FindBin qw/$RealBin/;

my $igoogle_settings_filename = "$RealBin/iGoogle-settings.xml";
my $number_of_rss_items_per_feed = 8;
print STDERR "Opening iGoogle settings file: $igoogle_settings_filename\n";

my $tabs = {}; # Tab Name => [ urls ];
# Called by XML::Twig on each ModulePrefs part.
sub collect_feed_urls
{
   my ($twig, $element) = @_;
   # Extract the URL from this element, and the tab name from its ancestor.
   my $url = $element->att('xmlUrl');
   my $tab_name = $element->parent('Tab')->att('title');
   # Put feed urls in an array ref, grouped by tab name.
   $tabs->{$tab_name} = [] unless defined $tabs->{$tab_name};
   push @{$tabs->{$tab_name}}, $url;
}

# Keep a quick record of any error messages that might help debug why a feed failed, for inclusion in the HTML.
my $error_messages = {};   # feed url -> string

sub main
{
   # Create our Twig object and tell it what bits we're interested in.
   my $twig = XML::Twig->new(
         twig_handlers => {
            'Tab//Module[@type="RSS"]/ModulePrefs' => \&collect_feed_urls,
         }
      );
   $twig->parsefile($igoogle_settings_filename); # XML::Twig will just die if there's a problem. Fine by me.

   # Fetch the feeds.
   my $rsses = get_feeds($tabs);

   # Print the HTML.
   print create_html($tabs, $rsses);
}


sub get_feeds
{
   my ($tabs) = @_;
   # Fetch the feeds and build a hashref of feed url => XML::RSS object (or undef)
   my $rsses = {};
   foreach my $url_list (values %$tabs) {
      foreach my $url (@$url_list) {
         $rsses->{$url} = get_feed($url);
      }
   }
   return $rsses;
}

sub get_feed
{
   my ($url) = @_;
   # Use LWP::UserAgent to fetch the page for us, and decode it from whatever text encoding it uses.
   print STDERR "Fetching $url ...";
   my $ua = LWP::UserAgent->new;
   $ua->timeout(30);    # A timeout of 30 seconds seems reasonable.
   $ua->env_proxy;      # Read proxy settings from environment variables.
   # One step that became necessary: LIE TO THE WEBSITES. Without this, Questionable Content
   # and some other sites instantly give us a 403 Forbidden when we try to get their RSS.
   # I know that Jeph Jacques, author of QC, did have some problem a while ago with some
   # Android app that was "Stealing" his content by grabbing the image directly and not the
   # advertising that supports his site. That's fine, but user-agent filters harm the web.
   # I'm just trying to get a feed of comic updates, here - I don't care if there's no in-line
   # image right there in the feed, all I need is a simple list of links to QC's archive
   # pages. That way Jeph gets his ad revenue, and his site is easy to check for new updates.
   # Sigh. Stuff like this, and sites that require Javascript to display simple images and text
   # are a blight on the internet.
   $ua->agent("Mozilla/5.0 Firefox/25.0");

   my $response = $ua->get($url);
   unless ($response->is_success) {
      print STDERR " Failed! ", $response->status_line, "\n";
      $error_messages->{$url} = "Feed download failed: " . $response->status_line;
      return;
   }
   my $feed_text = $response->decoded_content;

   # Obtained a bit of text over the interwebs, but is it valid RSS?
   my $rss = XML::RSS->new;
   eval {
      $rss->parse($feed_text);
   };
   if ($@) {
      print STDERR " Bad RSS feed!\n";
      $error_messages->{$url} = "RSS parsing failed: $@";
      return;
   }

   print STDERR " OK\n";
   return $rss;
}


sub create_html
{
   my ($tabs, $rsses) = @_;
   my $output = "";
   # My HTML syntax-highlighter hates the <meta> line for some reason, but only in the full script.
   # It's getting confused about just what it's being asked to highlight, perhaps.
   # Oh well! TMTOWTDI.
   my $content = "text/html;charset=utf-8";
   $output .= qq{<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">\n};
   $output .= qq!<html>\n!;
   $output .= qq!  <head>\n!;
   $output .= qq!    <meta http-equiv="content-type" content="$content" />\n!;
   $output .= qq!    <title>maiGoogle</title>\n!;
   $output .= qq!  </head>\n!;
   $output .= qq!  <body>\n!;

   $output .= html_body($tabs, $rsses);

   $output .= qq!  </body>\n!;
   $output .= qq!</html>\n!;
   return $output;
}

sub html_body
{
   my ($tabs, $rsses) = @_;
   my $output = "";
   # For each Tab, print a header and the feeds. Tabs are sorted in alphabetical order, why not.
   foreach my $tabname (sort keys %$tabs) {
      $output .= html_tab($tabname, $tabs->{$tabname}, $rsses);
   }
   return $output;
}

sub html_tab
{
   my ($tabname, $url_list, $rsses) = @_;
   my $output = "";
   $output .= qq!\n<h1>$tabname</h1>\n!;
   foreach my $url (@$url_list) {
      my $rss = $rsses->{$url};
      $output .= html_rss_feed($url, $rss);
   }
   return $output;
}

sub sanify
{
   my ($str) = @_;
   $str //= "";
   $str =~ s/&/&amp;/g;   # Ampersands first, so the entities added below don't get double-escaped.
   $str =~ s/</&lt;/g;
   $str =~ s/>/&gt;/g;
   return $str;
}

sub html_rss_feed
{
   my ($url, $rss) = @_;
   my $output = "";

   # Not all the feeds might have been fetched OK, or some might not have parsed properly.
   if ( ! defined $rss) {
      # Do we know what went wrong?
      my $error = $error_messages->{$url} // "The feed failed to load for mysterious reasons, sorry.";
      $error = sanify($error);   # sanify() returns the cleaned string rather than editing in place.
      $output .= qq!<div class="feed">\n!;
      $output .= qq!  <span class="title"><a href="$url">$url</a></span>\n!;
      $output .= qq!  <p class="fail">$error</p>\n!;
      $output .= qq!</div>\n!;
      return $output;
   }

   # Feed seems to have loaded OK.

   # Figure out what the title of this feed is; default to the URL.
   my $title = $rss->channel('title') // $url;
   $title = sanify($title);
   # Where should clicking the title of the feed box link to?
   my $title_link = $rss->channel('link') // $url;
   $title_link = sanify($title_link);

   # Show them the top few items.

   $output .= qq!<div class="feed">\n!;
   $output .= qq!  <span class="title"><a href="$title_link">$title</a></span>\n!;
   $output .= qq!  <span class="rsslink"><a href="$url">rss</a></span>\n!;
   $output .= qq!  <ul>\n!;

   my @items = @{$rss->{items}};
   # Remove elements from the list, starting at N, to the end. This leaves elements 0..(N-1).
   # (Only do this when there are more than N items, to avoid a "splice offset past end" warning.)
   splice(@items, $number_of_rss_items_per_feed) if @items > $number_of_rss_items_per_feed;

   # Show them as list items.
   foreach my $item (@items) {
      $output .= qq!    <li> !;
      $output .= html_rss_item($item);
      $output .= qq! </li>\n!;
   }
   $output .= qq!  </ul>\n!;
   $output .= qq!</div>\n!;

   return $output;
}

sub html_rss_item
{
   my ($item) = @_;
   # Return an html link to this RSS feed item.
   my $title = sanify($item->{title});
   my $link = sanify($item->{link});
   return qq!<a href="$link">$title</a>!;
}

main();
You can run this as-is by invoking e.g.
./maiGoogle.pl >maiGoogle.html
or put it in your crontab to run hourly or whatever. It wouldn't work well as a CGI script because it's going to have to check all of those feeds before it gets around to returning a page to you, every time.

I'm not going to promise further refinements in weekly installments, because we all know how that goes. If future posts happen, they happen; if not, tweak it yourself!