File Name               Node(s) Covered    Send or Receive
villin-all.png          All nodes (4)      All transmits and receives
villin-all-node-2.png   Node 2             All transmits and receives
villin-rx.png           All nodes          Receive (rx)
villin-rx-node-0.png    Node 0             Receive (rx)
villin-rxs.png          All nodes          Receive (rxs)
villin-tx.png           All nodes          Transmit (tx)
villin-txs.png          All nodes          Transmit (txs)

February 17, 2004

February 10, 2004

    I finished a rough draft of the poster outline:

February 6, 2004

    OpenOffice's maximum workspace is 46.85"x46.85", and we need a 48"x72" space.
    So I cleaned up the netcapacity stuff and introduced a flag that makes the server stream data continuously, putting a known load on the switch. With this I was able to quantify the load by looking at netpipe. I placed the source in CVS under /cluster/generic/bin/netload/
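    The real source is the netload program in CVS above; purely as an illustration of the idea, something as simple as an unthrottled stream of zeros between two hosts puts a comparable constant load on a link (hostnames and port below are arbitrary, and this is not the netload code):

        # Illustration only -- not the netload source.
        # Receiver: discard everything that arrives on an arbitrary port.
        nc -l -p 5001 > /dev/null
        # Sender: stream zeros at the receiver until interrupted.
        dd if=/dev/zero bs=1M | nc receiver-host 5001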
    I figured that some good graphs would help in parsing the switch data, so I created a script that parses the snmp-count.pl data, generates a gnuplot .plot file for each data set, renders a .png from each .plot file, and saves all of the results (a sketch of the pipeline follows the list below). There are a few things that you will find handy to know in the need-to-know.txt file. The results can be found in the following directories:
    • Additive Results
    • Differential Results: These graphs show the number of [humm] since the last poll. Look for spikes...
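    Roughly, the pipeline looks like the sketch below. The per-counter .dat layout, paths, and column order here are assumptions for illustration, not the actual snmp-count.pl output format.

        #!/bin/sh
        # Hypothetical sketch: turn each .dat file of (poll, count) samples into
        # a gnuplot .plot file and render it to a .png.
        for dat in results/*.dat; do
            base=${dat%.dat}
            cat > "$base.plot" <<EOF
        set terminal png
        set output "$base.png"
        set xlabel "poll"
        set ylabel "packets"
        plot "$dat" using 1:2 with lines title "$base"
        EOF
            gnuplot "$base.plot"
        done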

February 5, 2004

    Started to look at ways to move Gromacs to MP_Lite, but tabled it until I have a large chunk of time to dedicate to it.
    I started playing with the netcapacity scripts that were developed during the summer of 2001. After reworking the scripts (cleaning, commenting, and some minor fixes) I have been able to understand how they work and how they relate to each other. I have the source below:

February 3, 2004

    I upgraded Cricket to 1.0.4 from 1.0.3, and added the Cairo & Bazaar Switches to the configuration.
    I polished up the specialized switch scheduler to make it backwards compatible with the scheduler in CVS, then placed the new scheduler in CVS for future use.
    A polished version of snmp-count.pl is in /cluster/generic/src/
    A new script in the b-and-t-gromacs source tree creates conf files for the scheduler. Since the conf files are gaining weight, I wrote the script so that, after the proper parameter values are edited in, it generates an individual configuration file for every combination of those parameters.
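    The idea is just nested loops over the parameter values; here is a minimal sketch with made-up parameter names and a made-up conf format (the real names and format live in the b-and-t-gromacs tree):

        #!/bin/sh
        # Hypothetical sketch: write one conf file per combination of the
        # parameters below. Names and the conf layout are placeholders.
        for nodes in 2 4 8; do
            for window in 64K 128K 256K; do
                for mtu in 1500 9000; do
                    f="sched-n${nodes}-w${window}-mtu${mtu}.conf"
                    {
                        echo "nodes  = $nodes"
                        echo "window = $window"
                        echo "mtu    = $mtu"
                    } > "$f"
                done
            done
        done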
    I cleaned out all of my switch testing stuff from the database and disk, and ran the vacuum around the directories to remove any excess file clutter that is not needed for later analysis. The scripts I used are in my directory under ~/src/shell/clean.sh and ~/src/shell/pre-clean-check.sh
    I updated the All Cluster Summary page to add the switches that have made it into Cricket: All Cluster Summary
    The tcp_sack tests are finished and the results are below:

January 30, 2004

    Looking into netpipe
    I have netpipe set up on bazaar and cairo. I set it up to run over the range of 1K to 1GB on bazaar, and the results will be graphed through gnuplot and placed here (a sample invocation follows the list):
    • Bazaar
    • Cairo
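    As a reminder of how the runs are kicked off, roughly the following; the NetPIPE flag names here are from memory and may differ between versions, so check NPtcp's usage output before trusting them.

        # On cairo (receiver side):
        NPtcp
        # On bazaar (transmitter side), sweeping roughly 1K..1GB and logging to a file:
        NPtcp -h cairo -l 1024 -u 1073741824 -o np.bazaar-cairo.out
        # np.bazaar-cairo.out is then fed to gnuplot for the graphs linked above.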

    I have been looking into TCP Slow Start. Sun has a document that provides an overview of the algorithm.
    I think we should run a set of tests turning tcp_sack off [it is on by default] and see if we gain anything. In conjunction with this flag we will also disable tcp_dsack and tcp_fack (a sketch of how the flags are flipped follows the quoted explanation). The text below is taken from here.
     The tcp_sack variable enables Selective Acknowledgements (SACK) as they are defined in RFC 2018 - TCP Selective Acknowledgement Options and RFC 2883 - An Extension to the Selective Acknowledgement (SACK) Option for TCP. These RFC documents describe a TCP option that was developed specifically to handle lossy connections.

     If this variable is turned on, our host will set the SACK option in the TCP option field of the TCP header when it sends out a SYN packet. This tells the server we are connecting to that we are able to handle SACK. In the future, if the server knows how to handle SACK, it will then send ACK packets with the SACK option turned on. This option selectively acknowledges each segment in a TCP window. This is especially good on very lossy connections (connections that lose a lot of data in transfer), since it makes it possible to retransmit only the specific parts of the TCP window that lost data, rather than the whole TCP window as the old standards required. In other words, if a certain segment of a TCP window is not received, the receiver will not return a SACK for that segment; the sender then knows which packets were not received and retransmits just those packets.

     For redundancy, this option will fill as much of the option space as it can, 40 bytes per segment. Each SACKed packet takes up two 32-bit unsigned integers, so the option space can describe 4 SACKed segments. However, normally the timestamp option is used in conjunction with this option; the timestamp option takes up 10 bytes of data, so only 3 segments may be SACKed in each packet in normal operation.

     If you know that you will be sending data over an extremely lossy connection, such as a bad internet connection, at one point or another, this variable is recommended to be turned on. However, if you will only send data over an internal network consisting of a perfect-condition two-foot cat-5 cable, with both machines able to keep up with maximum speed without any problems, you should not need it. This option is not required, but it is definitely a good idea to have it turned on. Note that the SACK option is 100% backwards compatible, so you should not run into any problems talking to other hosts on the internet that do not support it.

     The tcp_sack option takes a boolean value. It is set to 1 (turned on) by default. This is generally a good idea and should cause no problems.
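    For the actual test runs, the three flags can be flipped at runtime with sysctl (these are the standard Linux sysctl names; no reboot is needed):

        # Turn SACK and its relatives off for the test run...
        sysctl -w net.ipv4.tcp_sack=0
        sysctl -w net.ipv4.tcp_dsack=0
        sysctl -w net.ipv4.tcp_fack=0
        # ...and back on afterwards.
        sysctl -w net.ipv4.tcp_sack=1
        sysctl -w net.ipv4.tcp_dsack=1
        sysctl -w net.ipv4.tcp_fack=1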

January 29, 2004

    I set up ntpd on all of the nodes and admin to sync to hopper, and hopper to sync to a remote server. This should keep all of the computers on the same time. I will keep watching it over the next few weeks to make sure that there is no drift.
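    For reference, the relevant ntp.conf lines amount to something like the following; the outside server name is a placeholder, not the one actually configured:

        # /etc/ntp.conf on the nodes and on admin: follow hopper
        server hopper

        # /etc/ntp.conf on hopper: follow an outside time source (placeholder name)
        server time.example.org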
    The results from varying the buffer sizes are here:
    For the next set of tests we want to vary the following three parameters (information from ip-sysctl.txt):
        tcp_wmem - vector of 3 INTEGERs: min, default, max
            min: Amount of memory reserved for send buffers for TCP socket.
            Each TCP socket has rights to use it due to fact of its birth.
            Default: 4K
            default: Amount of memory allowed for send buffers for TCP socket
            by default. This value overrides net.core.wmem_default used
            by other protocols, it is usually lower than net.core.wmem_default.
            Default: 16K
            max: Maximal amount of memory allowed for automatically selected
            send buffers for TCP socket. This value does not override
            net.core.wmem_max, "static" selection via SO_SNDBUF does not use this.
            Default: 128K
        tcp_rmem - vector of 3 INTEGERs: min, default, max
            min: Minimal size of receive buffer used by TCP sockets.
            It is guaranteed to each TCP socket, even under moderate memory
            pressure.
            Default: 8K
            default: default size of receive buffer used by TCP sockets.
            This value overrides net.core.rmem_default used by other protocols.
            Default: 87380 bytes. This value results in window of 65535 with
            default setting of tcp_adv_win_scale and tcp_app_win:0 and a bit
            less for default tcp_app_win. See below about these variables.
            max: maximal size of receive buffer allowed for automatically
            selected receiver buffers for TCP socket. This value does not override
            net.core.rmem_max, "static" selection via SO_RCVBUF does not use this.
            Default: 87380*2 bytes.
        tcp_mem - vector of 3 INTEGERs: low, pressure, high
            low: below this number of pages TCP is not bothered about its
            memory appetite.
            pressure: when amount of memory allocated by TCP exceeds this number
            of pages, TCP moderates its memory consumption and enters memory
            pressure mode, which is exited when memory consumption falls
            under "low".
            high: number of pages allowed for queueing by all TCP sockets.
            Defaults are calculated at boot time from amount of available
            memory.
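    Each test in the sweep then amounts to setting the three-value vectors and re-running netpipe; a sketch with example byte values (not the values we will actually sweep):

        # min / default / max, in bytes.
        sysctl -w net.ipv4.tcp_rmem="4096 87380 4194304"
        sysctl -w net.ipv4.tcp_wmem="4096 16384 4194304"
        # tcp_mem is in pages, not bytes; usually left at the boot-time defaults.
        sysctl -w net.ipv4.tcp_mem="48128 48640 49152"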

January 28, 2004

    I have learned that one can set the MTU size via ifconfig to any size (in octets) between 60 and 9000, with the default being 1500. I have set up some tests to run up the range using the 64K TCP window size (both OS and LAM-MPI). I saw no change with this setup, as illustrated by the two tables below:
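    For reference, the per-interface change is a one-liner (eth0 here stands in for whatever the node's interface actually is):

        ifconfig eth0 mtu 9000    # upper end of the range
        ifconfig eth0 mtu 1500    # back to the default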
    I have been watching the packet sizes going through the High Order switch on bazaar as villin, cut, and dppc run on b8 and b9; the tables of results are below (a sketch of how the counters are pulled over SNMP follows the tables).
    • Villin (Parallel Structure = 2)
      Villin                        Node b8                              Node b9
      Packet Size      Start #      Finish #     Difference     Start #   Finish #     Difference
      64               685          3428         2743           684       2569         1885
      65 - 127         29           338237       338208         29        337944       337915
      128 - 255        9            831          822            9         97           88
      256 - 511        0            692          692            0         522          522
      512 - 1023       0            1016         1016           0         1013         1013
      1024 - 1518      0            820346       820346         0         818836       818836
    • Cut (Parallel Structure = 2)
      Cut                           Node b8                              Node b9
      Packet Size      Start #      Finish #     Difference     Start #   Finish #     Difference
      64               354          13460        13106          357       12370        12013
      65 - 127         12           1051242      1051230        12        1050722      1050710
      128 - 255        0            13107        13107          0         12170        12170
      256 - 511        0            21413        21413          0         21037        21037
      512 - 1023       0            15           15             0         15           15
      1024 - 1518      0            2035137      2035137        0         2031424      2031424
    • Dppc (Parallel Structure = 2)
      Dppc                          Node b8                              Node b9
      Packet Size      Start #      Finish #     Difference     Start #   Finish #     Difference
      64               227          78788        78561          223       77456        77233
      65 - 127         19           5289409      5289390        19        5288695      5288676
      128 - 255        2            3154         3152           2         586          584
      256 - 511        0            1988         1988           0         34           34
      512 - 1023       0            2029         2029           0         2028         2028
      1024 - 1518      1            10500761     10500760       1         10479620     10479619
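    The per-size counters above come from the switch over SNMP. Assuming the switch exposes the standard RMON etherStats group and net-snmp is installed, a snapshot can be pulled with something like the following (the community string and the availability of RMON-MIB on the switch are assumptions):

        snmpwalk -v 2c -c public bazaar RMON-MIB::etherStatsPkts64Octets
        snmpwalk -v 2c -c public bazaar RMON-MIB::etherStatsPkts65to127Octets
        snmpwalk -v 2c -c public bazaar RMON-MIB::etherStatsPkts1024to1518Octets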

    After writing the below, I did some testing explicitly setting the RPI to [tcp|sysv|usysv] and noticed no difference between any of them, which is mostly due to the nature of our tests: one processor per node carries the load, and that is the only load given to the machine. If we were running on both CPUs per node we would see an improvement using usysv (unless the nodes carried additional load that we wished to balance properly, in which case sysv would be optimal).
    I did some testing with the following (a sample invocation follows the table):
    LAM-MPI Options                                                   Using SMP   ps_node   avg_cpu_time
    -ssi rpi tcp -ssi rpi_tcp_short=8K                                No          290       125
    -ssi rpi tcp -ssi rpi_tcp_short=8K                                Yes         116       309
    -ssi rpi usysv -ssi rpi_tcp_short=8K -ssi rpi_usysv_short=8K      No          290       125
    -ssi rpi usysv -ssi rpi_tcp_short=8K -ssi rpi_usysv_short=8K      Yes         300       120
    -ssi rpi sysv -ssi rpi_tcp_short=8K -ssi rpi_sysv_short=8K        No          290       125
    -ssi rpi sysv -ssi rpi_tcp_short=8K -ssi rpi_sysv_short=8K        Yes         300       120
    -ssi rpi lamd                                                     No          167       216
    -ssi rpi lamd                                                     Yes         134       269
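    For the record, one row of the table corresponds to an invocation roughly like this; the process count and the Gromacs binary name are placeholders, and the short-message cutoff is spelled out in bytes here:

        # usysv RPI with an 8K short-message cutoff, as in the table above
        mpirun -np 8 -ssi rpi usysv -ssi rpi_usysv_short=8192 -ssi rpi_tcp_short=8192 mdrun_mpi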