
Five-Minute Tutorial: Understanding the Encounter Power System (EPS) Reports Directory

No matter how you run your power analysis - with Encounter Power System (EPS) or from within Encounter Digital Implementation (EDI) System - you're probably familiar with the result directory. It will look something like VDD_125C_avg_1 and have lots of files inside. The first ones you probably look at are the "results" text file and the ir_limit.gif (at least those are the first ones I look at). While these will give you the immediate information you're looking for regarding the analysis (IR-drop, EM, etc.), several releases ago there appeared a Reports directory which gathered a lot of other information to make your life easier!

Even if your IR-drop looks good, you may still have a few unconnected instances, or be missing a powergrid view, or may have had a problem with the power consumption run that the rail analysis uses. If you know what files to check out in this Reports directory, you can make sure that you're not missing anything. I'll use "VDD" in the filename examples, but of course it could be "VSS," "VDDCore," or whatever your rails are named. Here's my breakdown of what to check out:


  • VDD.disconnected_inst.asc - This file lists the instances in the design that are not connected to the rail being analyzed. This file MUST be reviewed. There may be cells in this file that you can ignore, such as I/O cells if the LEF files do not model the pwr/gnd busses correctly. In that case, you just need to double-check that these cells are indeed getting power in the design, and make sure that the LEF pin connectivity is the reason they end up here. Once you've done that, you can easily manipulate the file to get rid of the cells you know are false violations (see the sketch after this list) and make sure there is nothing else.
  • VDD.disconnected_pgv.asc - This file will list the cell types whose powergrid views are not connected to the pwr/gnd net you analyzed. This file MUST be reviewed. Some of the cells listed here may be ok - there could be a macro connected to a different power net than the one analyzed, or the same kind of LEF pin issues that are ok to ignore in the disconnected_inst file. But like the disconnected_inst file, make sure you take a look at each item and understand why it's here.
  • VDD.missing_pgv.asc - This file lists the cells in your design that do not have a powergrid view. This file MUST be reviewed. Looking at this file enabled me to catch a mistake: my design had started using a few new RAM cells, but I had not regenerated the RAM powergrid views. Once I saw the results of this file, I realized I had to generate powergrid views for the new cells and then rerun the power and rail analysis.
  • VDD.pwr_annotation.asc - This file lists the instances that do not have power consumption information for the rail you analyzed. This file MUST be reviewed. If something went very wrong with the power analysis before the rail analysis, you could catch it here. But there also may be cells listed here that you can ignore: an example would be a PLL instance, that attaches to VSS and VDDA, but you were analyzing VDD. Since this instance does not connect to VDD, the power analysis that was run before rail analysis does not have any power consumption for VDD for this cell.
  • VDD.pgv_table.asc - This file lists each cell type and what kind of powergrid view was used (port, detailed, etc.), and more importantly, if the powergrid view was NOT used. Any cell types that say "disconnected" here will be in the disconnected_pgv file.
  • VDD.unconnected_sections.asc - This file lists floating metal segments that are labeled as the pwr/gnd net you analyzed, but are not connected to the grid. Leaving these in the design may not be an issue, but it's a good idea to clean them up so that they don't mask any real issues.
  • VDD.layerbased_ir.asc - This file is a breakdown of the IR-drop by layer. If you do have an IR-drop issue, you may be able to quickly narrow it down to one layer by looking at this file. If your IR-drop passes, it's still interesting information to have.
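Cleaning the false violations out of the disconnected-instance report can be scripted in a few lines of Tcl. Here's a minimal sketch, assuming the report lists one instance per line; the cell names in the ignore list are hypothetical placeholders for the I/O cells you've already verified as safe:

# Sketch: filter known-false violations from the disconnected-instance report.
# "PAD_VDD" and "PAD_VSS" are hypothetical cell names; substitute your own.
set ignorable {PAD_VDD PAD_VSS}
set in  [open VDD.disconnected_inst.asc r]
set out [open VDD.disconnected_inst.filtered w]
while {[gets $in line] >= 0} {
  set keep 1
  foreach cell $ignorable {
    if {[string match "*${cell}*" $line]} { set keep 0; break }
  }
  if {$keep} { puts $out $line }
}
close $in
close $out

Anything left in the filtered file deserves a closer look.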
This is not an exhaustive list of all the files in the Reports directory; just the ones that I have found to be most useful. Please let me know in the comments if there are other files here you can't live without and how you use them. I'm interested in how other designers are making use of this information.

- Kari Summers

Adding Custom Shapes and Text is New and Improved in EDI System 11


You may have noticed that in the Encounter Digital Implementation (EDI) System 11 the commands addCustomBox, addCustomLine and addCustomText are no longer in the documentation. These previous commands weren't cutting it when it came to the features customers wanted, and they were not supported by OpenAccess or database commands like dbGet. So they've been replaced in EDI 11 by the commands add_shape and add_text. I think you will like the expanded features and database support of these new commands over their predecessors. Following are some of the highlights. Note that the previous addCustom* commands will still work in EDI 11 but may be disabled in the future.

add_shape

Use add_shape to add custom rectangles, polygons and path segments to your design. These shapes are represented as Special Nets (snet) in the database. Use the -net option to specify the net name to associate the shape with. If -net is omitted the shape is assigned the net name _NULL to indicate it is floating. This still allows you to access the shape through database commands (e.g. dbGet).
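For example, here's a hedged sketch of locating floating shapes through that _NULL net. The .sWires attribute name is an assumption on my part, based on the sWire objType shown in the dbGet output later in this post, so verify it with dbGet <ptr>.?? in your own session:

# Sketch: inspect floating custom shapes via the _NULL net.
# Attribute names are assumptions; confirm with dbGet <ptr>.??
set nullNet [dbGet top.nets.name _NULL -p]
if {$nullNet ne "0x0"} {
  dbGet $nullNet.sWires.??
}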

As you can see below, add_shape has significantly more features than addCustomBox:

add_shape [-help]
    -layer {layerNameOrPointer}
    [-net {string}]
    {-rect {x1 y1 x2 y2} | -polygon {x1 y1 x2 y2 ... xn yn} |
     -pathSeg {x1 y1 x2 y2} -width <value>
         [-beginExt <value> -endExt <value>]}

addCustomBox layerName x1 y1 x2 y2

Here's an example of adding a rectangle for net VDD on layer M1:

     add_shape -net VDD -layer M1 -rect {1 1 25 9}

 

When adding path segments you can specify begin and end extension values. The extensions can be any positive value which is on the manufacturing grid. Also, paths are center-line based so in the example below X=2.5 is where I want the center of the wire. Here's an example of adding a vertical segment on M1:

     add_shape -net VDD -layer M1 -pathSeg {2.5 1 2.5 25} -width 3 -beginExt 0 -endExt 0

 

Combining add_shape with dbGet leads to some nice automation. For example, say I select the rectangle and path I created above. I can then create a polygon on M3 overlapping them using:

     add_shape -polygon [lindex [dbShape [dbGet selected.polyPts -i 0] OR [dbGet selected.polyPts -i 1] \
         -output polygon] 0] -layer M3 -net VDD

As I mentioned above these shapes are represented as special nets so you can use commands such as editSelect, editDelete, etc. with them just like other special nets.

Here is an example of how the above shapes are represented in DEF:

     SPECIALNETS 2 ;
     ...
     - VDD
       + POLYGON Metal3 ( 50000 18000 ) ( 8000 18000 )
         ( 8000 50000 ) ( 2000 50000 ) ( 2000 2000 ) ( 50000 2000 )
       + ROUTED Metal1 6000 ( 5000 2000 0 ) ( * 50000 0 )
       + RECT Metal1 ( 2000 2000 ) ( 50000 18000 )
       + USE POWER
       ;
     END SPECIALNETS

Another nice feature of add_shape is that it returns a pointer to the new object. For example:

     encounter 42> set rectPtr [add_shape -rect {10 10 12 12} -layer M4]
     0x2aaab3eb8fd0
     encounter 43> dbGet $rectPtr.??
     beginExt: 0
     box: {10 10 12 12}
     endExt: 0
     geomType: rect
     layer: 0x1a7afa48
     net: 0x2aaab3834ef8
     objType: sWire
     polyPts: {{10 10} {12 10} {12 12} {10 12}}
     pts: 0x0
     shape: notype
     shieldNet: 0x0
     status: routed
     subClass: {}
     width: 2
Lastly, you can stream the objects out to GDS just like other special nets.

     # streamOut map file:

     M1  SPNET  1  0

add_text

Use add_text to add custom text on the desired layer. As you can see below, add_text has significantly more features than addCustomText:

add_text [-help]
    [-alignment {centerCenter centerLeft centerRight lowerCenter lowerLeft lowerRight upperCenter upperLeft upperRight}]
    [-drafting {true false}]
    [-font {euroStyle gothic math roman script stick fixed swedish milSpec}]
    [-height <value>] -label {string} [-layer {layerNameOrPointer}]
    [-orient {MX MX90 MY90 R0 R180 R270 R90}] -pt <x> <y>

addCustomText layerName "text" x1 y1 height

 

For example, to add text on layer M1:

     add_text -label "Hello World!" -pt {1 1} -layer M1

When you add text to a specific layer its visibility/selectability is controlled by both the layer and the overall text control.

 

You can also omit the layer. When you do this the text is added to the general text layer:

     add_text -label "Hello World!" -pt {1 1}

Below is the respective stream out mapping depending on whether the text is on a metal layer or the general text layer:

     # Mapping text of a specific metal layer:

     M1    TEXT  1  0

     # Mapping general text:

     text  TEXT  2  0

Note: If you specify values for font, height, orientation, or alignment, they will not be reflected in the EDI System GUI; EDI System displays the text using default values. But these properties are saved for OpenAccess interoperability with Virtuoso, so if you open the design in Virtuoso you'll see the text displayed as you specified.
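For example, this hedged one-liner combines several of the options from the usage above (the label and values are made up for illustration):

     add_text -label "block_id" -pt {0 0} -layer M1 -height 5 -font roman -orient R90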

Brian Wallace

Writing More Compact Encounter Scripts with dbGet Expressions


Querying the Encounter database with dbGet is typically pretty concise to begin with. But you might not be aware of its support for expression-based matching, which enables even more compact scripting.

Let's take a very simple but common scripting challenge: Finding all of the high fanout nets in the design.

Then let's take this a little further. How would we write a script to find all nets with fanout greater than "n" which are not clock nets?

Historically we'd do this by iterating through all of the nets in the design, checking whether their fanout is greater than "n", checking whether it is marked as a clock net, and if so add it to a list of nets that meet the criteria. This would take about a half-dozen lines of code.

But by using dbGet's support for expression-based matching, we can make this script more compact. Here's how it works.

First, let's look at how we'd write this script without expressions:

set high_fanout_nets {}
foreach net [dbGet top.nets] {
  if {[dbGet $net.numInputTerms] > 32} {
    if {![dbGet $net.isClock]} {
      lappend high_fanout_nets $net
    }
  }
}
return $high_fanout_nets

Here's how we'd write the same functionality using dbGet's expression-based matching:

encounter 1> dbGet top.nets {.numInputTerms > 32 && .isClock == 0}

Much more compact, right?

So here's how it works, step by step. The attribute ".numInputTerms" gives us, effectively, the fanout for each net. If we were to query that attribute for each net we'd get a list of numbers representing the fanout of each net in the design:

encounter 2> dbGet top.nets.numInputTerms
5 5 1 16 32 32 16 33 19 17 18 21 20 21 20 20 71 17 17 8 16 16 32 16

We more commonly use "simple" matching to find the things we're looking for, like object names, or to test for 1s and 0s. And we could certainly do that with the .numInputTerms attribute:

encounter 3> dbGet top.nets.numInputTerms 32
32 32 32 32 32 32 32 32

And if we wanted to get the pointers to those nets we could do it like this:

encounter 4> dbGet top.nets.numInputTerms 32 -p
0x2aaab4252f38 0x2aaab20acbe8 0x2aaab20acf30 0x2aaab215fc88 0x2aaab2160078 0x2aaab21faa00 0x2aaab21faca0 0x2aaab21fcab8

And if we wanted to get their names we could do it like this:

encounter 5> dbGet [dbGet top.nets.numInputTerms 32 -p].name
DTMF_INST/TDSP_CORE_INST/ALU_32_INST/n_3062 DTMF_INST/TDSP_CORE_INST/EXECUTE_INST/n_896 {DTMF_INST/TDSP_CORE_INST/EXECUTE_INST/nbus_426[0]} DTMF_INST/TDSP_CORE_INST/TDSP_CORE_GLUE_INST/n_1213 DTMF_INST/TDSP_CORE_INST/TDSP_CORE_GLUE_INST/n_1357 DTMF_INST/RESULTS_CONV_INST/n_2516 DTMF_INST/RESULTS_CONV_INST/n_2521 DTMF_INST/RESULTS_CONV_INST/n_2512

But if we want to get more sophisticated we can use dbGet expression matching. The first thing to notice is the unique syntax these expressions use. If we want to do the same as we did a couple examples back -- ie, get the pointers to all nets with a fanout of 32 -- here's how we do it with expression matching:

encounter 6> dbGet top.nets {.numInputTerms == 32}          
0x2aaab4252f38 0x2aaab20acbe8 0x2aaab20acf30 0x2aaab215fc88 0x2aaab2160078 0x2aaab21faa00 0x2aaab21faca0 0x2aaab21fcab8

Notice that we select the attribute to match on with ".numInputTerms", then the criteria ("==" in this case), then the value. And we wrap it all in "{}". Note also that dbGet automatically returns pointers in this mode and doesn't require a "-p".

From there we can make it more complex, with a greater than rather than equal:

encounter 7> dbGet top.nets {.numInputTerms > 32}           
0x2aaab3c5c928 0x2aaab3c5c9d0 0x2aaab3cf7898 0x2aaab3cf8c48 0x2aaab40e5890 0x2aaab42533d0 0x2aaab4253a60 0x2aaab44474b0 0x2aaab20ad080 0x2aaab20adc38 0x2aaab20c4180 0x2aaab20c44c8 0x2aaab21fb288 0x2aaab21fbfa8

Or combine two expressions to match the nets with fanout greater than "n" which are not marked as clock nets:

encounter 8> dbGet top.nets {.numInputTerms > 32 && .isClock == 0}
0x2aaab3c5c928 0x2aaab3c5c9d0 0x2aaab40e5890 0x2aaab42533d0 0x2aaab4253a60 0x2aaab44474b0 0x2aaab20ad080 0x2aaab20adc38 0x2aaab20c4180 0x2aaab20c44c8 0x2aaab21fb288 0x2aaab21fbfa8

If we wanted to get the names of these nets we could wrap it within another call to dbGet:

encounter 9> dbGet [dbGet top.nets {.numInputTerms > 32 && .isClock == 0}].name
test_modeI scan_enI {DTMF_INST/TDSP_CORE_INST/alu_cmd[1]} DTMF_INST/TDSP_CORE_INST/ALU_32_INST/n_1716 DTMF_INST/TDSP_CORE_INST/ALU_32_INST/n_1738 DTMF_INST/TDSP_CORE_INST/MPY_32_INST/n_268 {DTMF_INST/TDSP_CORE_INST/EXECUTE_INST/nbus_440[0]} DTMF_INST/TDSP_CORE_INST/EXECUTE_INST/n_4221 DTMF_INST/TDSP_CORE_INST/DECODE_INST/n_4475 DTMF_INST/TDSP_CORE_INST/DECODE_INST/n_181 DTMF_INST/RESULTS_CONV_INST/n_2509 DTMF_INST/RESULTS_CONV_INST/n_6011

I hope this is helpful in making your scripting within Encounter more compact. Check back next time and I'll show how to write these nets to a file, one net name per line by using redirect and a couple of tricks.

I'd love it if you subscribed to the Cadence Digital Implementation blogs for notification of new posts.

Question of the Day: Have you found cases where dbGet's expression based matching is particularly useful?

-Bob Dwyer

EDI System’s get_metric Command Makes Metrics Reporting Quick and Easy


In this blog post I want to highlight the command get_metric that was introduced in Encounter Digital Implementation (EDI) System 10.1 and enhanced further in version 11. Have you ever tried writing a script to extract information from the log file like run times or timing results? It becomes complicated quite fast when you're trying to capture the desired data, especially if a command is run multiple times. Also, any script is reliant on the log file format staying consistent.

The command get_metric was developed to make reporting metrics easy and straightforward. When the variable encEnableMetric is set to 1 (the default in EDI 11), EDI System will automatically store metrics for a set of predefined commands. These metrics are saved with the database (*.enc.dat/designName.metric) and made accessible from session to session using get_metric.

Here are some examples of how it is used:

get_metric by itself returns all metrics last computed:

encounter> get_metric
     34726           design.numStandardCell
     35327           design.numNet
     0               design.numFloatBlock
     4               design.numFixedBlock
     34726           design.numSingleRowCell
     0               design.numDoubleRowCell
     0               design.numMultiRowCell
     0               design.numIoInst
     0               design.numFixedIo
     0               design.numFloatIo
     156584          design.numTerm
     4.43            design.numTermPerNet
     40.547 %        design.util
     0.138           design.pinDensity
     1.10e+06 um     place.totalNetLength
     -1.036 ns       timing.setup.WNS.all
     -61.875 ns      timing.setup.TNS.all
     12273           timing.setup.numPaths.all
     100             timing.setup.numViolatingPaths.all
     0.026 ns        timing.setup.WNS.reg2reg
     ...

 

Use the -cmd option to filter metrics by command:

encounter> get_metric -cmd verifyGeometry
verifyGeometry
     3    verify.geom.cell
     0    verify.geom.samenet
     0    verify.geom.wiring
     0    verify.geom.antenna
     0    verify.geom.short
     0    verify.geom.overlap
     3    verify.geom.total

 

Use wildcards to match multiple commands:

encounter> get_metric -cmd verify*
verifyGeometry
     3    verify.geom.cell
     0    verify.geom.samenet
     0    verify.geom.wiring
     0    verify.geom.antenna
     0    verify.geom.short
     0    verify.geom.overlap
     3    verify.geom.total
verifyConnectivity -type all -error 1000 -warning 50
     0    verify.conn
verifyProcessAntenna -reportfile leon.antenna.rpt -error 1000
     0    verify.antenna

 

You can specify the metric name to report. If multiple commands have the same metric it returns the most recent one:

encounter> get_metric timing.setup.WNS.all
     -1.036 ns timing.setup.WNS.all

 

Filter by both command and metric:

encounter> get_metric timing.setup.WNS.all -cmd {optDesign* *setup*}
optDesign -preCts
     0.006 ns   timing.setup.WNS.all
optDesign -postCts
     -1.023 ns timing.setup.WNS.all
optDesign -postCts -hold
     -1.023 ns timing.setup.WNS.all
optDesign -postRoute
     -1.036 ns timing.setup.WNS.all
optDesign -postRoute -hold
     -1.036 ns timing.setup.WNS.all

 

If you want to format your own standard QoR report, the -tcl and -value options are used to read the data directly into Tcl lists/values you can write out in any style you choose. Below is an example of using the -tcl option:

encounter> get_metric design.* -tcl
{design.numStandardCell 34726} {design.numNet 35327} {design.numFloatBlock 0} {design.numFixedBlock 4} {design.numSingleRowCell 34726} {design.numDoubleRowCell 0} {design.numMultiRowCell 0} {design.numIoInst 0} {design.numFixedIo 0} {design.numFloatIo 0} {design.numTerm 156584} {design.numTermPerNet 4.43} {design.util 40.547 %} {design.pinDensity 0.138}
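For instance, here's a minimal sketch that turns the -tcl output into a CSV file. It assumes each entry is a {name value ?unit?} list, as in the example above (the foreach/break idiom stands in for lassign on older Tcl versions):

# Sketch: write the get_metric -tcl output to a CSV file
set fp [open qor_report.csv w]
puts $fp "metric,value,unit"
foreach entry [get_metric design.* -tcl] {
  foreach {name value unit} $entry break   ;# poor man's lassign
  puts $fp "$name,$value,$unit"
}
close $fp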

 

Use the -value option to return only the metric's value. Here I assign it to a variable:

encounter> set totalGeomViols [get_metric verify.geom.total -value]
  3
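From there, the value can drive flow decisions; a tiny sketch:

# Sketch: act on the metric value retrieved above
if {$totalGeomViols > 0} {
  puts "WARNING: $totalGeomViols geometry violations remain"
}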

 

Lastly, use the following to report metrics for all the commands run:

encounter> get_metric -cmd *
placeDesign
     34726           design.numStandardCell
     35327           design.numNet
...
optDesign -preCts
     0.006 ns   timing.setup.WNS.all
     0.000 ns   timing.setup.TNS.all
...
clockDesign -specFile DATA/leon.ctstch
     306             clock.clk.numBuffer
     1978.44         clock.clk.areaBuffer
...
optDesign -postCts
     -1.023 ns       timing.setup.WNS.all
     -62.451 ns      timing.setup.TNS.all
...
optDesign -postCts -hold
     -1.023 ns       timing.setup.WNS.all
     -62.490 ns      timing.setup.TNS.all
     12273           timing.setup.numPaths.all
...
optDesign -postRoute
     -1.036 ns       timing.setup.WNS.all
     -61.835 ns      timing.setup.TNS.all
     12273           timing.setup.numPaths.all
...
optDesign -postRoute -hold
     -1.036 ns       timing.setup.WNS.all
     -61.875 ns      timing.setup.TNS.all
...
verifyGeometry
     3    verify.geom.cell
...
verifyConnectivity -type all -error 1000 -warning 50
     0    verify.conn
verifyProcessAntenna -reportfile leon.antenna.rpt -error 1000
     0    verify.antenna

 

The metrics are stored in the .metric file with both the command and options. So it's useful to use wildcards when specifying the -cmd option.

 

The commands currently supported by get_metric are:

placeDesign
optDesign
timeDesign
clockDesign
verifyGeometry
verifyConnectivity
verifyProcessAntenna
verifyMetalDensity
report_power
addFiller
addWellTap
addEndCap

If you haven't tried get_metric, I encourage you to give it a try and let us know what you think. Are there commands currently not supported for which you would like to report metrics?

Brian Wallace

Improve Your Productivity With Rapid Adoption Kits (RAKs) for Encounter Digital Implementation (EDI) System and Sign-off Flow


As you know, Cadence Online Support is your 24/7 site for getting help and resolving issues related to Cadence software. If you are signed up for e-mail notifications, you've noticed new solutions, application notes, videos and other content are added daily. In this blog I want to highlight a new content type called the Rapid Adoption Kit (RAK). This new content type is a packaging of related material to demonstrate how you can improve your productivity and maximize the benefits of your tools. Here is what I like about the RAKs:

  • Go Deep - RAKs provide detailed information and training on a focused topic.
  • Get Your Hands Dirty - Each RAK comes with a workshop database so you can learn-by-doing.
  • You're in Control - The workshop labs, presentations and videos are all downloadable so you can go through them at your own pace.

For example, if you want to be an expert with the Encounter Digital Implementation (EDI) System then knowing DBTcl is essential. The "Database Access with DBTCL" RAK provides an overview presentation of the DBTcl commands and hands-on labs to advance your skills quickly.

Or perhaps you are an expert EDI System user but wanting to learn more about a new technology like Clock Concurrent Optimization (CCOpt). There's a RAK for that too!

Cadence EDI System customers can access the following RAKs for Encounter Digital Implementation (EDI) System and Sign-off Flow:

 

The table below summarizes each kit; "Download" and "View" are links on Cadence Online Support, and each workshop database download includes the instructional document.

Rapid Adoption Kit                                     Workshop Database   Instructional Doc.   Appnotes / Videos      Overview (slides)   Publish Date
Clock Concurrent Optimization (CCOpt)                  Download (12MB)     View                 None                   View                June 14, 2012
Database Access with DBTCL                             Download (90MB)     View                 None                   View                June 14, 2012
Post Assembly Closure (PAC) Flow                       Download (112MB)    View                 None                   None                June 25, 2012
Prototyping Foundation Flat Flow                       Download (77MB)     View Lab1, Lab2      View Appnote, Video    View                June 27, 2012
Prototyping Foundation Top to Bottom Flow              Download (77MB)     View Lab1, Lab2      View Appnote, Video    View                June 27, 2012
Encounter Low-Power Design Flow: CPF Implementation    Download (9MB)      View                 None                   None                June 27, 2012
MMMC SignOff ECO using EDI System and ETS              Download (21MB)     View                 None                   View                June 27, 2012

 

These RAKs as well as RAKs for other products are easily available by going to support.cadence.com and selecting Resources -> Rapid Adoption Kits.

RAKs were developed in response to customer feedback so we'd love to hear what you think. Also, what topics do you want to see new RAKs created for? Send us your feedback by adding a comment below or using the Feedback box on Cadence Online Support.

Happy learning!

Brian Wallace

Capturing and Processing Encounter Console Output with "redirect"


In my last post I wrote about writing more compact db access scripts with dbGet's expression-based matching. We found all of the high fanout nets in the design which weren't clock nets:

dbGet [dbGet top.nets {.numInputTerms > 16 && .isClock == 0}].name

This writes the name of each net to the console. But how would we write those nets to a file? Say, for example, we wanted to call optDesign -selectedNets on them. optDesign expects a file with a list of net names, one per line. How could we create that file? We could use the redirect command along with some clever scripting. More on that in a moment, but let's talk about redirect first.

The Basics

The redirect command as described here is available starting in EDI (Encounter Digital Implementation) System release 11.1.

In its simplest form redirect takes the output that's written to the Encounter console and writes it to a file. This is useful when you want to process that output programmatically and do something based on what's found in that file.

Take for example the checkPinAssignment command. It writes output like this:

encounter 1> checkPinAssignment

Start Checking pins of top cell: [top]
**WARN: (ENCPTN-562):   Pin [in] is [PLACED] at location (   0.330,   50.120 4 ) is NOT ON THE DESIGN BOUNDARY.
Summary report for top level: [top]
        Total Pads                         : 0
        Total Pins                         : 3
        Legally Assigned Pins              : 2
        Illegally Assigned Pins            : 0
        Unplaced Pins                      : 0
        Constant/Spl Net Pins              : 0
        Internal Pins                      : 1
        Legally Assigned Feedthrough Pins  : 0
        Illegally Assigned Feedthrough Pins: 0
End of Summary report
End Checking pins of top cell: [top]

The command doesn't return anything (i.e., if you ran "set x [checkPinAssignment]", "x" wouldn't have a value), so if we want to query whether checkPinAssignment found any violations -or- check which specific pins were in violation, we can do it by redirecting to a file and parsing the output. It's not necessarily pretty but sometimes you've got to do it. Here's how it works:

encounter 1> redirect checkPinAssignment.out checkPinAssignment
**Info: stdout is directed to file 'checkPinAssignment.out'...

Now, you could iterate through each line in checkPinAssignment.out and regexp for "WARN", find a pin name and operate on it in some way.
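For example, here's a sketch that pulls the pin name out of each WARN line. The regexp assumes the "Pin [name]" message format shown above, so adjust it if your release words the warning differently:

# Sketch: extract violating pin names from the redirected output
set in [open checkPinAssignment.out r]
while {[gets $in line] >= 0} {
  if {[regexp {WARN.*Pin \[(\S+)\]} $line -> pinName]} {
    puts "Violating pin: $pinName"
  }
}
close $in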

If the command you want to execute contains options, embed the options in curly brackets:

encounter 1> redirect queryPlaceDensity.out {queryPlaceDensity -userSpecified}
**Info: stdout is directed to file 'queryPlaceDensity.out'...

Intermediate 

We've talked before about writing to and reading from a file. I've got a couple more tips that extend on that idea and make things more compact.

First, check out the dbForEachFileLine command. It essentially does the same thing as:

set infile [open checkPinAssignment.out "r"]
while {[gets $infile line] >= 0} {
  if {[regexp WARN $line]} {
    Puts "$line"
  }
}
close $infile

If you're like me and have trouble remembering the infile/open/while/gets/close business try dbForEachFileLine instead:

dbForEachFileLine checkPinAssignment.out line {
  if {[regexp WARN $line]} {
    Puts "$line"
  }
}

But we can get even fancier with this approach, and avoid writing to a file altogether with redirect's -variable option:

redirect x checkPinAssignment -variable
set lines [split $x \n]
foreach line $lines {
  if {[regexp WARN $line]} {
    Puts "$line"
  }
}

Extra Credit

Back to the original puzzler I mentioned previously. How could we write to a file, one net per line, all of the nets with fanout greater than "n" so we could pass it to a command like "optDesign -selectedNets <fileName>"? It's a little tricky, and admittedly a bit convoluted to be honest, but there is a way:

encounter 18> redirect high_fanout.nets {puts "[join [split [dbGet [dbGet top.nets {.numInputTerms > 16 && .isClock == 0}].name]] \n]"}   
**Info: stdout is directed to file 'high_fanout.nets'...

encounter 19> more high_fanout.nets
scan_enI
DTMF_INST/TDSP_CORE_INST/MPY_32_INST/n_7295
DTMF_INST/TDSP_CORE_INST/MPY_32_INST/n_6735
{DTMF_INST/TDSP_CORE_INST/MPY_32_INST/ab_a[15]}
{DTMF_INST/TDSP_CORE_INST/MPY_32_INST/ab_b[15]}

Pretty cool!
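If the nested one-liner is too dense for your taste, a plain loop produces the same file with less head-scratching:

# Sketch: same result as the redirect one-liner, written long-hand
set fp [open high_fanout.nets w]
foreach netPtr [dbGet top.nets {.numInputTerms > 16 && .isClock == 0}] {
  puts $fp [dbGet $netPtr.name]
}
close $fp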

Here's the full usage for the redirect command:

encounter 1> help redirect

Usage: redirect [-help] [<file_or_var_name>] [<command>] [-tee] [-stdin | -append | -variable ] [-stderr | -stdin ]

-help                        # Prints out the command usage
<file_or_var_name>           # file name or variable name, for outputs to go
                             # (string, optional)
<command>                    # command to execute (string, optional)
-append                      # append stdout/stderr to file or variable
                             # (bool, optional)
-stderr                      # redirect stderr also (bool, optional)
-stdin                       # redirect stdin (bool, optional)
-tee                         # redirect output to both screen and file or
                             # variable (bool, optional)
-variable                    # redirect to a Tcl variable (bool, optional)

I hope there's a useful nugget or two in here.

Question of the Day: Any tips or tricks you'd like to share about reading from/writing to a file, capturing output as a variable, or redirecting it to a file? 

-Bob Dwyer

10 Encounter Tips and Tricks You May Not Be Aware Of


In looking over the shoulders of Encounter users over the years I've found there's a bunch of little tips and tricks I use to make interacting with the tool a little easier that aren't necessarily immediately obvious. Here are some of the more common ones I used this week:

  1. When navigating an Encounter log file in a text editor, search forward for "<CMD>". Each time a command is executed it's embedded in the log file, for example: "<CMD> optDesign -preCTS". This makes it easier to understand what was executed and what resulted. 
  2. Use "getLogFileName" to determine which log file goes with which Encounter session.
  3. Use "win" at the Encounter prompt to raise the window of the session associated with the Encounter console. Doesn't work with all window managers but when it does I find this quite useful.
  4. Use "dbWireCleanup" to delete all of the signal routes in the design (trialRoute or NanoRoute). See also "editDelete -type Signal".
  5. Use "deleteAllPowerPreroutes" to delete all of the power routing in a design. Sure you could "editDelete -type Special" but what fun is that?
  6. Use "encounter -init design.enc" to restore a design from the UNIX prompt.
  7. Load designs with "source design.enc" at the Encounter prompt instead of using restoreDesign. That way you don't need to use a pulldown menu and you don't need to tell the tool the top cell name since it's embedded within the .enc file.
  8. Use emacs-style shortcuts like "ctrl-a", "ctrl-e", and "ctrl-k" to position the cursor and edit text at the Encounter console.
  9. Use "dbGet selected.??" to query physical information about an object.
  10. Use "report_property [get_nets <net_name>]" to query timing information about an object.

Question of the Day: What little tips and tricks have you found useful in Encounter over the years? Let us know in the comments.

-Bob Dwyer

How To: Bring Up Encounter "man" Pages from a UNIX Prompt


Okay, this one is too cool not to share.

The other day a customer and I were trying to understand a tool behavior better so we did what we all do in desperate times: We read the documentation.

As straightforward as "reading the documentation" would seem, I bet no two users of the system interact with documentation the same way. Some people like to bring up "cdnshelp" at the UNIX prompt. Some like to download the .pdfs from the install tree. Others like to use CommandGetIt through a web browser. Some click the "Help" button within the tool. Others like to read them through http://support.cadence.com.

For most things, I use "help <command_name>" at the Encounter prompt. And "man <command_name>" for an extended description of what each option does along with some examples.

So when the designer I was working with saw me launching Encounter to bring up the man page for a command, he asked if there is a way to do that from the UNIX prompt without launching the tool. I didn't know how to do that, but there is a way. Here's how...

The short answer is to add the path to the Encounter man pages to your MANPATH environment variable.

The man pages reside in the Encounter installation tree under <install_path>/share/fe/man - for example:

UNIX> which encounter
/icd/flow/EDI/EDI111/latest.USR1.lnx86/lnx86/bin/encounter

UNIX> setenv MANPATH /icd/flow/EDI/EDI111/latest.USR1.lnx86/lnx86/share/fe/man

UNIX> man placeDesign

This should bring up the man page for placeDesign without needing to launch Encounter.

However, you probably don't want to lose access to regular UNIX commands, so you'll want to append the Encounter path to your existing MANPATH. How to do this depends on the shell you use but this works for me:

setenv MANPATH ${MANPATH}:/icd/flow/EDI/EDI111/latest.USR1.lnx86/lnx86/share/fe/man

Want to get even fancier? If you've already got the encounter executable in your path you can make use of the "cds_root" utility to set your MANPATH. This way your MANPATH will always update to the latest version of Encounter you're pointing to. Slick!

setenv MANPATH ${MANPATH}:`cds_root encounter`/share/fe/man

Another approach would be to create an alias to a separate man command for Encounter:

UNIX> alias eman 'man -M `cds_root encounter`/share/fe/man'
UNIX> eman placeDesign

Hope this is helpful! And thanks to my colleagues Tim, Benton, and Bupendra for pointing out how to do this.

-Bob Dwyer


In Case You Missed It – The Most Popular EDI System Knowledge Content Published in Recent Months


I mentioned in my first blog that one of my roles in customer support is to identify and author knowledge content for Cadence Online Support (http://support.cadence.com). In this blog post I want to highlight some of the popular Encounter Digital Implementation (EDI) System content published in recent months.

If you're not receiving email notifications on the latest Cadence Online Support content, log in to http://support.cadence.com, go to My Support -> My Account, and then click on the Notification Preferences tab. Here you can specify the type of content you want to be notified of and the frequency.

There were several new videos and application notes posted recently; AOCV is definitely a hot topic lately.

Additionally, below are the solutions published in recent months which customers viewed the most. I hope you enjoy these highlights!

-  Brian

Adding decap cells near clock buffers or flip-flops

Provides a script to place decoupling capacitors (decap cells) near clock tree buffers or even flip-flops. These cells can draw high peak current, so inserting decap cells nearby minimizes the effects of that current draw.

Script to convert config file to global variable file for init_design in EDI 11

The Design Import flow in EDI 11 uses global variables in place of the config file. This solution provides a script for converting the EDI 10.1 config file to a global variables file. It also updates your environment to MMMC if needed.

Step-by-step through a top-down hierarchical design flow

An easy to follow description of the steps to import a large design with blackboxes and go through a top-down hierarchical partitioning in EDI System.

Using generateVias to create optimal vias for NanoRoute

Shows how to simplify your via definitions in your technology LEF using the generateVias command.

What is the encounter.logv file written out by EDI 11?

Answers the common question, "What is this encounter.logv file?".

How to Search-and-Repair after Post-Route Timing Closure

Simple flow to delete and re-route nets on which Verify Geometry reports violations.

assembleDesign does not support one step assembly with nested partitions until EDI 11

Clarifies support for one-step design assembly with nested partitions using assembleDesign.

Does optDesign fix max transition violations on nets which have set_case_analysis constraint?

Explains optDesign will fix max transition violations even if the net is constant.

Can I update Tech LEF on the fly in Encounter? No.

Explains what LEF data can be loaded incrementally.

How do you balance skew between clocks using CTS?

Shows how to use ClkGroup in the CTS constraints file to balance skew between clocks.

Simple Steps to Debug DRC Violations Undetected in EDI System


You've placed and routed your design in the Encounter Digital Implementation (EDI) System. It passed Verify Geometry and Verify Connectivity without a violation. Great!

But when you run DRC signoff with your physical verification tool, you have violations related to the routing. What should you do now?

Depending on your situation there are usually two solutions:

1. Fix the violations by hand. This is okay if there are a small number and you don't need to go back to physical design.

2. Debug why these violations were not caught during physical design. This is done when you want to understand the cause of the miscorrelation. Is it a bug in one of the tools? Is a LEF rule wrong? Is the DRC rule deck wrong?

In this blog I'll explain my approach to debugging the discrepancy. If you have a different approach, please share it by adding a comment below. Here are the details of my approach:

  1. Load the DRC violation report into EDI System
  2. Use Violation Browser to see details of violation
  3. Verify violation is valid by reviewing Design Rule Manual
  4. Determine if corresponding LEF rule is correct
  5. Review other causes such as Verify Geometry option settings

First I read the violation report from the DRC checker into EDI System. EDI System supports the major DRC tool formats. In my run I've taken advantage of the Physical Verification System (PVS) interface from EDI System and invoked it straight from the EDI System GUI (PVS - Run DRC).

To read the results into the violation browser, I select Tools - Violation Browser. Then, on the Load Violation Report form, I specify the report file and its format, and click OK.

Here I see there is a violation on an M2 metal. If I go back to the Violation Browser and click on the actual rule, the Description field gives the reason for the violation. In this case it says the METAL2 to METAL2 spacing required is >= 0.15.

Zoom into the violation by clicking on it under the Location section of the Violation Browser.

Below is the example violation with rulers added to show the spacing and widths:

 

If you have the Design Rule Manual (DRM) you can double check the rule to confirm the DRC check is correct. If the DRC deck is in development or for a newer technology, it's possible it's wrong, but typically it is correct since this is for signoff.

The next step is to determine which LEF rule should enforce this spacing and verify it is defined correctly. This is not always easy, especially as more and more rules are required for smaller geometries, but typically you can narrow it down by knowing the layer, type of rule and value expected. And if the LEF has comments to label which LEF rules correspond to the DRM rules, then it's even easier.

If we go to the spacing rules for M2 we have a SPACING table which specifies required spacing based on the object width and parallel run length. This rule is likely the one which should be enforcing the spacing.

LAYER Metal2
...
  SPACINGTABLE
    PARALLELRUNLENGTH      0    0.32 0.75 1.5  2.5  3.5
    WIDTH             0    X.XX X.XX X.XX X.XX X.XX X.XX
    WIDTH             0.11 X.XX 0.15 X.XX X.XX X.XX X.XX
    WIDTH             0.75 X.XX X.XX X.XX X.XX X.XX X.XX
    WIDTH             1.5  X.XX X.XX X.XX X.XX X.XX X.XX
    WIDTH             2.5  X.XX X.XX X.XX X.XX X.XX X.XX
    WIDTH             3.5  X.XX X.XX X.XX X.XX X.XX X.XX ;

The rule seems correct, stating that for a width of 0.11 and a parallel run length of 0.32 the spacing should be 0.15. But looking closely at the LEF syntax I find objects have to be greater than the specified width to trigger the rule:

PARALLELRUNLENGTH {length} ...
  {WIDTH width {spacing} ...}

Specifies the maximum parallel run length between two objects, in microns. If the maximum width of the two objects is greater than width, and the parallel run length is greater than length, then the spacing between the objects must be greater than or equal to spacing. The first spacing value is the minimum spacing for a given width, even if the PRL value is not met.

So in this case the LEF rule is incorrect and must be adjusted. After I change 0.11 to 0.10 in the LEF rule above, Verify Geometry catches the error and flags it in both the Violation Browser and the GUI.
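For clarity, the adjusted row would look like this (only the width bound changes; the other values stay as in the table above):

    WIDTH             0.10 X.XX 0.15 X.XX X.XX X.XX X.XX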

 

An incorrect LEF rule definition is just one possible cause. Other possible reasons are listed in solution 11728790 on Cadence Online Support.

Lastly, always check with your foundry for the latest technology LEF. They often provide technology LEF files optimized and tested for EDI System.

And if you ever need help with debugging we're here to help. Contact us by opening a Service Request by going to http://support.cadence.com and selecting Service Requests - Create Service Request.

Thanks,

Brian Wallace

Five-Minute Tutorial: Why You Should Be Running Early DRC

$
0
0
Everyone knows you have to run signoff DRC before you tape out a design. Sometimes, DRC is left to exactly that moment - right before the tapeout. If major problems are found in the design at that point, the tapeout either has to be delayed, or there is a mad scramble to fix the issues. This is a situation no one wants to find themselves in.

Running DRC early and often is very much worth the effort. In addition to the general benefits of having the DRC deck and flow set up early so that it's pushbutton later in the project (there are often switches for metal stack, RDL type, bump pitch, etc. that need to be selected), I recommend running DRC at the following milestones:

Power grid in place, no cell placement or signal routing. Hard macros/IP may be placed.
Running DRC here will make sure that your power grid is DRC clean. It can be very costly to have to fix power grid issues around signal routing late in the flow. Also, a majority of DRC issues can occur in power grid vias. Getting this clean early will put you ahead of the game. If your memories/IP/other macros are already placed, you'll be verifying the power connections to them as well, and you'll also make sure they are placed far enough apart from each other.

All cells placed (including macros/IP and filler cells), but no signal routing. 
Checking DRC at this point is critical. You need to make sure you have all required welltaps and/or endcaps, and that they are spaced appropriately. Having to fix this later in the flow will mean moving functional cells and affecting your timing. You also want to make sure that any IP or memory blocks have the appropriate spacing between each other and to standard cells. Alignment marker cells are required in many processes, so you'll want to check that you've got that right as well. Don't forget to add standard cell fillers before this run! If you leave out the fillers, you'll have a ton of DRC violations to wade through! You also want to make sure your fill methodology follows any VT spacing rules, and the no-filler1 rule if that exists for your process. (For more info on that particular issue, see Five-Minute Tutorial: Avoiding The Use Of FILL1 Cells.)

As soon as you have the first cut of the design with all cells placed and all signal routing complete.
This can be a big one too. If this is a new process, you may be vetting the LEF routing rules. In a tested process, the routing will usually be ok, but you may have routing access issues to memories or IP that may be using outdated blockage methods. I see a lot of this with memories where the blockage does not go to the edge of the block (the recommended method) so that NanoRoute can access the pins in a planar fashion. You may need to edit the macro blockages, and you'll want to do this at the beginning of the project when you're not in so much of a rush.

After the first pass of metal fill.
I often see DRC violations regarding metal fill spacing from the edge and corners of the design. If you're using a signoff fill utility, you probably don't need to worry about this. But most of the time, metal fill is done in Encounter Digital Implementation System (EDI) so that the timing effects can be seen easily. In that case, you may need to add some routing blockage or adjust your fill settings to get DRC-clean fill. It's also important to make sure you're hitting the metal density targets. A special note: be on the lookout for MAX density violations! It can and does happen, and it's harder to fix than min density.

If you've done all this, then you can continue intermediate DRC checks as you near tapeout and you shouldn't have that much to fix in the final days of your design.

EDI users will be familiar with the verifyGeometry command. I highly recommend running this before you start any of the DRC runs listed above! The verifyGeometry function is not a complete signoff check, but it will point out almost all of your metal DRC issues (and also shorts). Signoff DRC is still needed to check layers below M1 (since EDI uses LEF and not full layouts). But if you proceed to DRC without running verifyGeometry, you're not saving yourself any time. Fix what you can in EDI with verifyGeometry first; then you'll be in very good shape for early DRC!
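As a sketch of what that pre-DRC check might look like (using the get_metric command covered in an earlier post in this blog series, and assuming metric collection is enabled):

# Sketch: run geometry checks, then query the violation count
verifyGeometry
set geomViols [get_metric verify.geom.total -value]
puts "Geometry violations to fix before signoff DRC: $geomViols"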
You may also be interested in this post: Simple Steps To Debug DRC Violations Undetected In EDI System.

- Kari Summers

Transitioning Your LEF-Based EDI System Design Flow to OpenAccess

$
0
0

The trend of combining analog and digital circuits on a single chip has been growing for several years. More recently I'm seeing more and more designers improve their productivity by transitioning their designs to OpenAccess (OA) and taking advantage of the interoperability between Virtuoso and the Encounter Digital Implementation (EDI) System. Whether you're performing floorplanning in Virtuoso (schematic-driven flow) or EDI System (netlist-driven flow), OA allows you to take advantage of interoperability features such as seamlessly defining and passing routing constraints. The Mixed Signal Interoperability Guide (Cadence Online Support account required) is the resource I turn to frequently when I have questions on the mixed-signal flow. Formal training is also available through the Analog-on-Top Mixed-Signal Implementation class.

In this blog I want to focus on data preparation and highlight the steps involved in creating a common PDK to be used by Virtuoso and EDI System. This involves translating the LEF files to OA, then reconciling the differences between your base PDK and the OA database created from the LEF. Once these differences are resolved, I explain how to load the design into EDI System referencing the OA libraries.

Converting LEF to OA

The first step is to create a LEF-compatible PDK which Virtuoso and EDI System can use. This is typically done in one of two ways:

1. A single PDK containing the base PDK information plus the LEF rules and vias required for physical design.
  • A single PDK is not flexible. For example, if you have to add a custom via or routing constraint you must modify this main PDK which may cause problems for other users.
2. Define the LEF rules and vias in an Incremental Technology Database (ITDB) which references the base PDK.
  • An ITDB is more flexible because updates can be made directly to the ITDB without affecting the base PDK. You can also define multiple ITDBs which reference the same base PDK.

The ITDB is typically created by converting the technology LEF to OA using the lefin command found in the Virtuoso installation. To create an ITDB from tech.lef referencing the basePDKLib you would run:

lefin -lef tech.lef -lib techLib -refLib basePDKLib

The LEF files defining the standard cells, hard macros and IO cells are then converted to OA using lefin:

lefin -lef stdcells.lef -lib macroLib -refLib techLib
lefin -lef memories.lef -lib macroLib
lefin -lef io.lef -lib macroLib

After the LEF files are translated to OA, run verilogAnnotate to indicate the bit order for busses:

verilogAnnotate -refLibs macroLib -verilog macros.v -refViews layout

Reconciling the ITDB with the Base PDK

Often the LEF technology data and the base PDK are not consistent, and you must reconcile their differences. For example, layer names, units or the manufacturing grid may differ. For more details on creating a LEF-compatible PDK, see the application note Open Access Reference Library Import on Cadence Online Support.

Another useful way to debug differences is to compare your original technology LEF to the LEF generated from the OA tech file you've created. You can use the write_lef_library command to compare these LEFs; write_lef_library writes out LEF syntax in a consistent order for easy comparison using the diff command. Below is an example flow to compare the LEF files. See the next section for details on specifying the variables to import an OA-based library.

Validation of LEF versus OA

Generate a LEF based on your original tech LEF:

# setup Tcl variables for reading the LEF based design
init_design
write_lef_library from_lef.lef
exit

Generate a LEF based on the OA technology file:

# setup Tcl variables for OA based library using the same Verilog
# and timing libraries
init_design
write_lef_library from_oa.lef
exit

Now run diff to compare the LEF files and investigate the differences:

diff from_lef.lef from_oa.lef

Look for things such as rules and vias defined in one file but not the other. Also, review rules which are the same but have different values specified.

Reading in the Design

After all the LEF libraries are converted you are ready to read in the design. This can be done by reading in the OA libraries plus a Verilog netlist, or by reading both the libraries and the design from OA.

Reading in OA Libraries and Verilog Netlist

Following are the variables to set to read in a design using OA libraries and a Verilog netlist:

# Library variables:
set init_oa_ref_lib {techLib macroLib}
set init_layout_view {layout}
set init_abstract_view {abstract}
# Design variables:
set init_verilog {netlist.v}
set init_design_settop 0
set init_top_cell {top}
# Set other init_design variables for global vars, timing, etc.
init_design
# Save the design to OA using saveDesign {lib cell view}
saveDesign -cellview {designLib top preplace}

Reading in OA Libraries and OA Design

Following are the variables to set to read in a design using OA libraries and OA design:

# Library variables:
set init_oa_ref_lib {techLib macroLib}
set init_layout_view {layout}
set init_abstract_view {abstract}
# Design variables:
set init_design_netlisttype {OA}
set init_oa_design_lib {designLib}
set init_oa_design_cell {top}
set init_oa_design_view {layout}
# Set other init_design variables for global vars, timing, etc.
init_design
# Save the design to OA using saveDesign {lib cell view}
saveDesign -cellview {designLib top preplace}

And there you go. I hope this overview helps you understand the data preparation steps involved in creating a LEF-compatible PDK and encourages you to utilize OA to take advantage of Virtuoso and EDI System's interoperability. Be sure to leverage the resources I reference above to help your transition go smoothly.

Thanks,
Brian Wallace

The Case for the Tiny Testcase

I often joke with customers that, although I realize they have to work on large designs, I do my best work on designs with just 2 or 3 instances. That's because I'm often trying to replicate an issue they've observed on their design and I'm attempting to reproduce that behavior in a smaller circuit.

I've found tiny testcases to be extremely efficient ways to gain quick clarity on tool behaviors which can then be more effectively applied to the real design. But it's not just having a tiny testcase that's most useful. It's the act of creating the small testcase where most insight is gained.

But it's not always easy to create a tiny testcase. Do you know how to write a syntactically correct gate-level Verilog netlist from scratch with a text editor? Do you know how to contrive complicated timing scenarios? Do you know how to modify .lib/LEF files to replicate, in a smaller setting, the things likely present in a real design? These are non-trivial things for someone who works in many different aspects of a design flow, as designers are tasked with understanding an ever-increasing breadth of tools.

I recall a quote from a colleague of mine, Thad McCracken (interview here), a few years ago. We were working on a benchmark and were having a hard time reproducing some of the issues we were observing in a small testcase. He said "If we can't reproduce it in a small testcase it tells me we fundamentally don't understand the problem well enough."

Isn't that the truth of it? With the exception of run-time/memory related issues, nearly every issue we run into can be replicated in a tiny representative testcase -- but doing so can be very difficult.

Customers often joke with me that the oldest tricks in the EDA Applications Engineering books are (1) to ask if you're using the latest tool version and (2) ask for a testcase. Sure it's easy to get a self-contained testcase with Encounter (see the link for saveTestcase at the end of this post). But a better question to ask is, "Do you have any hunches on the likely circumstances causing the issue?"

Tiny testcases can be powerful in the user community as well. Some of the most disciplined and effective project teams and CAD groups I've worked with relentlessly stress the tool at the onset of a project in ways they're going to need it to perform during crunch time.

Sure, some things only come to light in the context of real designs. But creating tiny testcases can efficiently flush out the gaps between project requirements and tool requirements to give everyone a chance to find ways to get the software to do what it needs to do to meet project requirements.

In our busy schedules we often feel like we don't have time to create small testcases to triage a situation. And indeed, sometimes we just need to send a testcase in to R&D for resolution. But next time you run into an issue I'd encourage you to consider whether a tiny testcase might shed more light on the situation.

To borrow a line from Paul Cunningham: "Sometimes we need to slow down to speed up."
-Bob Dwyer

Related Reading:

SPICE Correlation Made Easy by Encounter Timing System (ETS)

Hello, and welcome to my first blog!

As an application engineer in customer support, I have received quite a few queries on how to do SPICE correlation of timing numbers. This blog is intended to help users understand the flow/methodology for doing SPICE correlation of static timing analysis (STA) timing results using Encounter Timing System (ETS).

As we know, users correlate the critical paths from timing analysis against SPICE path simulation to gain signoff confidence in their design. ETS offers built-in critical path simulation for base delay and signal integrity (SI) correlation with SPICE.

This blog describes, at a high level, the flow/methodology available in ETS to perform path simulations with SPICE and correlate them with base delay timing.

SPICE Deck Generation

The ‘create_spice_deck’ command is available in ETS to generate the SPICE trace for a path.

The SPICE deck generated by ‘create_spice_deck’ includes:

- All nets in the path and their instance connections

- Standard cell gate information for the instances and their port connections

- Initial conditions and voltage sources

- Measure statements for slew and delay measurements

- RC parasitic network information

Various options of the create_spice_deck command can be used to specify the path(s) of interest and other information required for the SPICE deck.

For details on the supported options to this command, see the ETS documentation.

Examples of the SPICE deck generation command

1) The following command, without any options, will generate a SPICE deck for the worst path as seen by timing analysis:

create_spice_deck

2) The following command creates a SPICE deck for the specified path, using the predriver waveform as the input PWL and one level of side-path loading, and includes the paths of the specified SPICE subcircuit and model files in the deck:

create_spice_deck -report_timing {-retime path_slew_propagation -net -from_rise inst_flop1/q -through inst_buf/a -through inst_buf/y -to inst_flop2/d} -input_waveform predriver -subckt_file SPICE_subckt.sp -model_file models.sp -power {vdd vddw} -ground {vss vssw} -side_path_level 1 -outdir ETS_SPICE

3) The following command creates a SPICE deck and simulates it using the specified Spectre™ simulator:

create_spice_deck -run_path_simulation -Spectre /tools/Spectre

Running Path Simulation and Results Extraction

Path simulation can be done in two ways:

1) The Spectre™ path simulator available in the ETS installation (create_spice_deck -run_path_simulation) can be used to run the path simulation.

2) The SPICE deck can be generated in a user-specified directory, and a stand-alone path simulation (outside of the ETS environment) can be run using Spectre™ or any simulator that understands SPICE syntax.

Spectre™ path simulator in ETS

The create_spice_deck -run_path_simulation option can be used to run on-the-fly path simulation in ETS.

Note: When running simulation using -run_path_simulation, it is highly recommended to specify the SPICE subckt and model files using the -subckt_file and -model_file options, respectively. If they are not specified, the design must have cdB files loaded, and the software will get this information from the cdB files. However, it is mandatory to specify the subckt and model files if AAE is being used.

Besides writing a few files in the directory specified using the -outdir option, the command also reports a timing table with slew/delay/arrival columns from report_timing and from the path simulation, for correlation comparison. It will report two separate tables for the launch and capture paths if report_timing -path_type full_clock is used.

Stand-alone Path Simulation

If the create_spice_deck command is run without the -run_path_simulation option, it will save the SPICE deck of the specified path (path_1_setup.sp) in the directory specified using the -outdir option. By default, it saves the SPICE deck in the ets_pathsim directory.

The user can run stand-alone path simulation using Spectre™ or any other simulator that understands SPICE syntax on the SPICE deck (path_1_setup.sp) saved by ETS.

Upon successful completion of the path simulation, a path_1_setup.measure file will be generated, which can be used to extract results. It contains the slew and delay measurements for each stage of the timing path.

Slew/delay statements in the SPICE deck contain the words 'slew' and 'delay' to identify the slew and delay numbers for the respective stages in the timing path. The measure file can easily be post-processed to extract simulation results. For example, the sum of all the 'delay' stages gives the total path delay.
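As an illustration, here's a minimal Tcl sketch of such post-processing. It assumes the measure file uses simple "name = value" lines whose names contain the word "delay" (a hypothetical layout - adjust the regular expression to your simulator's output):

# sum_delays.tcl - add up the per-stage delay measurements
set total 0.0
set fp [open "path_1_setup.measure" r]
while {[gets $fp line] >= 0} {
    # match lines like "delay_2 = 5.67e-11"
    if {[regexp -nocase {delay\S*\s*=\s*([-+0-9.eE]+)} $line -> value]} {
        set total [expr {$total + $value}]
    }
}
close $fp
puts [format "Total path delay: %.4e s" $total]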


So, using either of the two methods explained above, you can easily correlate your design results using this ETS feature. There is an excellent appnote on this topic which not only explains the correlation flow and methodology in detail, but also showcases an example SPICE deck with clear descriptions of its important constructs. It also covers debugging techniques that can be used to resolve any correlation issues you encounter.

Click to visit the appnote Base delay SPICE correlation In ETS

The Cadence Online Support website http://support.cadence.com/ is your 24/7 partner for getting help and resolving issues related to Cadence software. If you are signed up for e-mail notifications, you've likely noticed new solutions, Application Notes (Technical Papers), videos, manuals, etc.

Hope you find this information useful.

Thanks,

-Mukesh

Five-Minute Tutorial: Creating An EM Model File


One of the least-fun parts of running power and rail analysis has always been coming up with the electromigration (EM) model file. In the past, this involved cracking open the process design rule manual, finding the appropriate equations, and creating a spreadsheet to calculate all the numbers needed for the various metal width and via sizes. Then, this information had to be put in the format of the model file used by Encounter Digital Implementation System (EDI) and Encounter Power System (EPS). This approach was prone to errors and involved some user decisions about what exactly to model. Frustrating, but it worked for the most part.

Now there is an easier way to get this information into an automatically-created EM model file. You'll need the iRCX file (also referred to as a unified tech file) from your foundry, and you'll need to locate both your extraction installation and your EDI installation.

When you request the iRCX information from your foundry, it may come in a directory that needs to be unzipped and untarred. Ultimately, you're looking for a file that's named something like IRCX_28NM_8M_typical.ircx. Make sure you have the right file for your process, metal stack, and extraction corner.

To locate your extraction installation, type:

>which qrc
/apps/PVE111/11.11.238/bin/qrc

To locate your EDI installation, type:

>which encounter
/apps/EDI110/11.12.000/tools/bin/encounter

Now, you'll run two translators. The first comes from the extraction installation and is called ircxtoict. This translates the iRCX file into the .ict format:

>/apps/PVE111/11.11.238/bin/ircxtoict -i IRCX_28NM_8M_typical.ict IRCX_28NM_8M_typical.ircx

Now that you have an .ict file, you can use the second translator, which comes from your EDI installation, to create the EM model file. But first, you'll need to create a small text file called conductor.widths (or another name of your choice). It looks something like this, with the order of each line being <metal_layer> <min_width> <max_width>. The width values are in microns:

M1 0.05 4.5
M2 0.05 4.5
M3 0.05 4.5
M4 0.05 4.5
M5 0.05 4.5
M6 0.05 4.5
M7 0.40 12.0
M8 0.40 12.0
AP 2.00 35.0

The metal layer names come from the .ict file, and the min and max widths come from the tech LEF. (Note that the metal layer names may differ between the .ict file and your tech LEF.)
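If you'd rather not build conductor.widths entirely by hand, a small script can give you a starting point. Here's a minimal Tcl sketch that pulls the routing layers and their minimum widths out of the tech LEF; it assumes standard LEF LAYER/TYPE/WIDTH syntax, uses a placeholder maximum width, and the layer names still need to be checked against the .ict file:

# widths_from_lef.tcl - rough starting point for conductor.widths
set fp [open "tech.lef" r]
set layer ""
while {[gets $fp line] >= 0} {
    if {[regexp {^\s*LAYER\s+(\S+)} $line -> name]} { set layer $name }
    if {[regexp {^\s*TYPE\s+ROUTING} $line]} { set isRouting($layer) 1 }
    if {[regexp {^\s*WIDTH\s+([0-9.]+)} $line -> w]} { set minw($layer) $w }
}
close $fp
foreach l [lsort [array names isRouting]] {
    # 4.5 is only a placeholder max width - fill in your real per-layer limits
    if {[info exists minw($l)]} { puts "$l $minw($l) 4.5" }
}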

Now we're ready for that second translator, called ict2emfiles. It converts the newly-created .ict file to the EDI/EPS EM model file format:

>/apps/EDI110/11.12.000/share/anls/gift/bin/ict2emfiles -eps -i IRCX_28NM_8M_typical.ict -w conductor.widths

This will result in a file called IRCX_28NM_8M_typical.ict.em_model, which you can then use during rail analysis in EDI/EPS.

This method saves a lot of time and removes potential sources of error. I was very happy to be able to create my EM model file this way, and I hope others will find this useful as well.

- Kari Summers


Quick Reference - 8 Ways to Optimize Power Using Encounter Digital Implementation (EDI) System


Everyone knows that the increasing speed and complexity of today's designs implies a significant increase in power consumption, which demands better optimization of your design for power. I am sure a lot of us are scratching our heads over how to achieve this, knowing that manual power optimization would be hopelessly slow and all too likely to contain errors.

Here are the top 8 things you need to know to optimize your design for power using the Encounter Digital Implementation (EDI) System.

Given the importance of power usage of ICs at lower and lower technology nodes, it is necessary to optimize power at various stages in the flow. This blog post will focus on methods that can be used to reach an optimal solution using the EDI System in an automated and clearly defined fashion. It will give clear and concise details on what features are available within optimization, and how to use them to best reach the power goals of the design.  

Please read through all of the information below before deciding on the right approach or strategy; it depends heavily on the priority of low power relative to the timing, runtime, area, and signoff criteria decided upon for your design. With the aid of some or all of the techniques described in this blog it is possible, depending on the design, to vastly reduce both the leakage and dynamic power consumed by the design.

Quick Reference - Top Things to Know about Power Optimization 

All of the following items, discussed here in brief, are covered in greater detail in the Low Power Optimization in EDI System appnote posted on http://support.cadence.com/

This is a one-stop quick reference, not a substitute for reading the full document.

1) VT partitioning uses various heuristics to gather the cells into a particular partition. Depending on how the cells get placed in a particular bucket, the design leakage can vary a lot. The first thing is to ensure that the leakage power view is correctly specified using the "set_power_analysis_mode -view" command. The "reportVtInstCount -leakage" command is a useful check to see how the cells and libraries are partitioned. Always ensure correct partitioning of cells.

2) In several designs, manually controlling certain leakage libraries in the flow might give much better results than the automated partitioning of cells. If the VT partitioning is not satisfactory, or the optimization flow is found to use more LVT cells than targeted, selectively turn off cells of certain libraries, particularly in the initial (preRoute) part of the flow: set the LVT libraries to don't-use and run preCts/postCts optimization. Depending on the final timing QOR, another incremental optimization with LVT cells enabled may be needed.
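For example, a minimal sketch of that sequence, assuming your LVT cells share an "_LVT" suffix (hypothetical naming - adjust the pattern to your libraries):

setDontUse *_LVT true
optDesign -preCTS
optDesign -postCTS
# if the final timing QOR requires it, re-enable LVT cells for an incremental pass
setDontUse *_LVT false
optDesign -postCTS -incremental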

3) Depending on the importance of leakage/dynamic power in the flow, the leakage/dynamic power flow effort can be set to high or low.

setOptMode -leakagePowerEffort {low|high}
setOptMode -dynamicPowerEffort {low|high}

If timing is the first concern, but having somewhat better leakage/dynamic power is desired, then select low. If leakage/dynamic power is of utmost importance, use high.

4) PostRoute optimization typically works with all LVT cells enabled. In case of a large discrepancy between preRoute and postRoute timings, or if SI timing is much worse than base timing, postRoute optimization may overuse LVT cells. So it may be worthwhile experimenting with a two-pass optimization: once with LVT cells disabled, then with LVT cells enabled.
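A minimal sketch of such a two-pass experiment, again assuming a hypothetical "_LVT" naming convention:

setDontUse *_LVT true
optDesign -postRoute
# second pass with LVT cells re-enabled
setDontUse *_LVT false
optDesign -postRoute -incremental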

5) In order to do quick PostRoute timing optimization to clean up final violations without doing physical updates, use the following:

setOptMode -allowOnlyCellSwapping true
optDesign -postRoute 

This will only do cell swapping to improve timing, without doing physical updates. This is specifically for timing optimization and will worsen leakage.

6) Leakage flows typically have a larger area footprint than non-leakage flows. This is because EDI trades area for power: to reduce leakage it uses more HVT cells, which need more area to fix timing. This sometimes necessitates reclaiming the extra area during postRoute optimization to get better timing convergence. EDI has an option to turn on postRoute area reclaim that is also hold-aware and will not degrade hold timing.

setOptMode -postRouteAreaReclaim holdAndSetupAware

7) Running standalone Leakage Optimization to do extra leakage reclamation:

optLeakagePower

This may be needed if some of the settings have changed or if leakage flows are not being used.

8) PreRoute optimization works with an extra DRC margin of 0.2 in the flow. On some designs this is known to result in extra optimization, causing more runtime and worse leakage. The option below is used to cancel this extra margin in DRV fixing:

setOptMode -drcMargin -0.2

Remember to reset this margin to 0 for postRoute optimization, as postRoute doesn't work with the extra margin of 0.2. Note that the extra drcMargin is sometimes useful in reducing SI effects, so by removing it, more effort may be needed to fix SI later in the flow.
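Putting the two settings together (a sketch, with the optDesign steps shown only to mark where each setting applies):

# preRoute: cancel the extra built-in 0.2 DRC margin
setOptMode -drcMargin -0.2
optDesign -preCTS
# (CTS and postCts optimization steps go here)
# postRoute: the margin must be back at 0
setOptMode -drcMargin 0
optDesign -postRoute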

I hope these tips help you achieve the power goals of your designs!

-Mukesh Jaiswal

Five-Minute Tutorial: Create Encounter Power System (EPS) Power-Grid Views For Standard Cells

In today's tutorial, I'm giving you a sample EPS (Encounter Power System) script that you can use to generate power-grid views for your standard cells. Power-grid views are used during rail analysis, with IR-Drop and EM (electromigration/current density) being the two most popular analysis types.

First, the LEF information is read in. The technology LEF needs to be read in first, then the LEF files of your standard cell libraries:

read_lib -lef tech.lef \
              stdcell_hvt.lef \
              stdcell_lvt.lef

Next, we tell EPS what kind of views we want to create. We're creating accurate standard cell views using the LEF models. We also need to point to the QRC extraction tech file, list all of our power/ground names (you may or may not have bulk pwr/gnd - you can leave that out if not), and include a layer mapping file.

set_power_library_mode \
    -accuracy accurate \
    -celltype stdcells \
    -extraction_tech_file tt_qrcTechFile \
    -lef_layermap lef_layer.map \
    -generic_power_names {VDD 0.90 VDD_SW 0.90 VDDG 0.90} \
    -generic_ground_names {VSS} \
    -generic_bulk_power_names {VNW 0.90} \
    -generic_bulk_ground_names {VPW} \
    -default_power_voltage 0.90 \
    -input_type pr_lef

Below is an example of the lef_layer.map file. The second column is what the metal and via layers are called in the QRC extraction tech file, and the fourth column is what the metal and via layers are called in the technology LEF. (The QRC techfile is not an ASCII file, but you can find the names in the .ict text file that usually comes with the QRC techfile.) In this example, the names happened to be the same between the QRC tech file and the technology LEF, but many times the layer/via names differ, especially for the upper layers.

metal   M1      lefdef M1
metal   M2      lefdef M2
metal   M3      lefdef M3
metal   M4      lefdef M4
metal   M5      lefdef M5
metal   M6      lefdef M6
metal   M7      lefdef M7
metal   M8      lefdef M8
metal   AP      lefdef AP
via     VIA1    lefdef VIA1
via     VIA2    lefdef VIA2
via     VIA3    lefdef VIA3
via     VIA4    lefdef VIA4
via     VIA5    lefdef VIA5
via     VIA6    lefdef VIA6
via     VIA7    lefdef VIA7
via     RV      lefdef RV

Finally, we can issue the characterize_power_library command, which is what creates the power-grid views. The filler cells and decap cells are specified, as well as any powergate cells. (If you're not working on a power-shutoff design, you can leave out the detailed_powergate option.) We also provide the SPICE model file from the foundry, the SPICE subckt files for our standard cells, and list the SPICE sections that contain the devices in our standard cells. (That part involves some trial and error - the first time you run the script, you may get errors for devices that are undefined. Search for those devices in the SPICE model file, then include the section they are found in, such as "ttg_hvt".) You'll notice we reference a "stdcell.list" file - this is a simple text file with one std cell name per line; a sketch for generating it follows the command below.

characterize_power_library \
    -celllist_file stdcell.list \
    -library_name accurate_std.tt_0p90v \
    -filler_cells { FILL* } \
    -decap_cells { DCAP* } \
    -detailed_powergate { \
                   {HEADBUFx16 VDDG VDD} \
                   {HEADBUFx32 VDDG VDD} \
                   } \
    -spice_models cln28hpm_1d8_elk_v1d0_2p1.l \
    -spice_corners {ttg ttg_lvt ttg_hvt TT TT_hvt TT_lvt Total Total_lvt Total_hvt} \
    -spice_subckts { \
                     stdcell_hvt_typical_25c.spice \
                     stdcell_lvt_typical_25c.spice \
                    }
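As a side note, if you need to generate the stdcell.list file mentioned above, here's a minimal Tcl sketch that scrapes the MACRO names out of the standard-cell LEFs (assuming standard LEF syntax; the file names match this example script):

# make_stdcell_list.tcl - one standard cell name per line
set out [open "stdcell.list" w]
foreach lef {stdcell_hvt.lef stdcell_lvt.lef} {
    set fp [open $lef r]
    while {[gets $fp line] >= 0} {
        if {[regexp {^\s*MACRO\s+(\S+)} $line -> cell]} { puts $out $cell }
    }
    close $fp
}
close $out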


To run the script (let's call it create_stdcell_pgv.tt.tcl), just start EPS and type:
source create_stdcell_pgv.tt.tcl

The resulting power-grid library from this sample script can be used for power and rail analysis in the tt_0p90v corner. You'll need to create a power-grid view library for each process/voltage you want to run power analysis in. So if you need to run IR-drop in the ss_0p81v corner, for example, make a copy of the script and edit it to refer to ss SPICE models, the ss qrcTechFile, and change all the voltages to 0.81. 

Once the script completes successfully, check the .report file that was generated to make sure that your cells report PASS. If you're using powergate cells, make sure they were recognized as such.

I hope this has provided a quick-start to getting your standard cell power-grid views created! 
- Kari Summers 


Here's the whole script at once, so you can just cut and paste into a file, and start editing for your specific design.


read_lib -lef tech.lef \
              stdcell_hvt.lef \
              stdcell_lvt.lef

set_power_library_mode \
    -accuracy accurate \
    -celltype stdcells \
    -extraction_tech_file tt_qrcTechFile \
    -lef_layermap lef_layer.map \
    -generic_power_names {VDD 0.90 VDD_SW 0.90 VDDG 0.90} \
    -generic_ground_names {VSS} \
    -generic_bulk_power_names {VNW 0.90} \
    -generic_bulk_ground_names {VPW} \
    -default_power_voltage 0.90 \
    -input_type pr_lef

characterize_power_library \
    -celllist_file stdcell.list \
    -library_name accurate_std.tt_0p90v \
    -filler_cells { FILL* } \
    -decap_cells { DCAP* } \
    -detailed_powergate { \
                   {HEADBUFx16 VDDG VDD} \
                   {HEADBUFx32 VDDG VDD} \
                   } \
    -spice_models cln28hpm_1d8_elk_v1d0_2p1.l \
    -spice_corners {ttg ttg_lvt ttg_hvt TT TT_hvt TT_lvt Total Total_lvt Total_hvt} \
    -spice_subckts { \
                     stdcell_hvt_typical_25c.spice \
                     stdcell_lvt_typical_25c.spice \
                    }

exit

CDNLive High-Performance Track: Do You Have What it Takes to Get Your High-Performance SoC to Market?


Implementing SoCs with embedded processors at advanced nodes has become increasingly difficult. This is due to the complexity of the design functionality as well as the low-power and increased-performance requirements driven by a plethora of end-user applications in modern hand-held devices. Path-breaking trends in ARMv8 64-bit processor-based microservers for power-efficient cloud computing/data centers, and in high-end content-generating superphones and tablets, have thrown new curveballs at chip designers. As we're aware, battery technology hasn't kept pace with Moore's Law; as a result, some amazing recent advances in process technology (such as FinFET and double patterning) and in EDA tools are helping offset the difficulties of designing power-efficient yet high-performance SoCs.

I'm really pleased to see a lineup of exciting papers in the High-Performance Digital Implementation track at the upcoming CDNLive Silicon Valley in Santa Clara, March 12, from some of the heavy hitters in the industry. These presentations describe the implementation of SoCs for a multitude of designs such as ARM Cortex-A core-based applications processors with state-of-the-art GPUs for demanding high-end smartphones and tablets; the world's first multi-Gigahertz ARMv8 64-bit processor architecture based server-on-chip for cloud computing/data centers; and complex networking ASICs, to name a few.

Rounding out the sessions are interesting papers from ARM on implementing their latest 64-bit Cortex-A57 processor based on the ARMv8 architecture, and on processor optimization pack (POP) IP development for ARM's big.LITTLE™ paradigm (with Cortex-A15 and Cortex-A7 processors, respectively) with Cadence's Encounter digital flows. These presentations exemplify the strong partnership and collaboration between ARM and Cadence in developing implementation reference methodologies (iRM) that ease designing ARM processors into leading-edge SoCs and accelerate time-to-market.

A common thread that the audience will hear is the significant power, performance and area (PPA) improvements customers have been able to achieve with some of the key Cadence tools such as RTL Compiler-Physical (RCP) and Encounter Digital Implementation (EDI) System featuring the latest GigaOpt physical optimization and Clock Concurrent Optimization (CCOpt) technologies. RCP, GigaOpt and CCOpt form the three pillars of Cadence's strong offering for implementing complex and high-performance designs in silicon. Some of the key advances include extending GigaOpt to the entire optimization flow (pre- and post-route) with full multi-CPU support, route-driven and layer-aware optimization at advanced nodes, and native integration of CCOpt in EDI, all leading to significant PPA improvements. Stop by the Cadence booth at the partner expo to learn more about the upcoming EDI release 13.1 highlights!

I've always been fascinated by our customers' end products and verticals (going beyond just chip design and into adjacencies) where Cadence's tools have been used for bleeding-edge designs. So when companies such as AppliedMicro, ARM, Avago and NVidia come to town to graciously share their design experiences, one can't help but sit up and take notice! CDNLive gives attendees a wonderful opportunity to understand the intricacies behind implementing these complex SoCs, including the challenges faced (from synthesis and design planning to final implementation, signoff, and everything in between) and how they surmounted them. Key takeaways include lessons learned and best practices that the audience can readily deploy in their own designs and methodologies. That's the beauty of CDNLive: it provides ample opportunities to learn from fellow designers while extending one's professional network. What more can you ask for?

On Wednesday, March 13, the R&D luncheon offers a unique opportunity for our customers to sit down with our R&D and product engineers to discuss the chip-design problems of the day. We've set up thematic tables to cater to different areas of focus, including Advanced Node (28/20/16/14nm), Clock Concurrent Optimization, Implementing GHz+ ARM Cortex-A processor based designs, low-power, mixed-signal, signoff, and more! This offers an informal atmosphere for Cadence to also better understand our customers' requirements.

Here's a sneak peek into the paper presentations in the high-performance track on March 12, 2013:

  • In Session HP105 (4:45-5:35pm), Sumbal Rafiq of AppliedMicro will present "X-Gene: Realizing a complex high-performance and power efficient 64-bit multicore ARMv8 based server-on-chip solution in silicon". This revolutionary multi-gigahertz design targets the extremely demanding cloud-computing/datacenter market. With multi-core ARMv8 CPUs, a network interface controller (NIC), a high-speed interconnect fabric, memory and other peripherals, the design complexity has reached unprecedented levels, posing several new challenges in chip integration and in meeting stringent PPA metrics. AppliedMicro and Cadence have collaborated from an early stage of development to deploy an RTL-to-GDSII flow based on Cadence's Encounter Digital tools to successfully implement and tape out the design. The proof's in working silicon!
  • Session HP104 (9:00-9:50am), titled "High Performance/Low Power Implementation of ARM Cortex-A15 and Cortex-A7 with ARM POP IP for ARM big.LITTLE Systems and Applications", will be presented by Sathyanath Subramanian from ARM. Sathya will talk about ARM's big.LITTLE heterogeneous processing concept and how the ARM POP IP, optimized with Cadence's flows at advanced nodes, provides designers a head start and a PPA boost for implementing designs with Cortex-A15 and Cortex-A7 processors.
  • In Session HP103 (2:30-3:20pm), Brent McKanna of ARM will present "Targeting High Frequency and Power Efficient Implementations for ARM's High Performance Cortex-A57 Processor". Cortex-A57 is ARM's latest and highest-performing ARMv8 64-bit processor, targeting enterprise server and high-end smartphone/tablet applications where maintaining high power efficiency at superior performance points is critical. ARM and Cadence have collaborated throughout the development of Cortex-A57 to create an RTL-to-signoff flow based on Cadence Encounter design tools. The paper describes the techniques used for handling the increased complexity of larger ARM cores and for closing designs on advanced nodes such as 28nm.
  • Session HP102 (1:30-2:20pm), titled "Advanced Strategies for Timing Closure Utilizing New GigaOpt Features", will be presented by Jack Benzel of Avago Technologies. Given the explosive growth in the number of large memory macros, and RC delays increasingly limiting long-haul wire performance at advanced nodes, new strategies are required to address low-latency architectures. Jack's paper will cover several of EDI System's new GigaOpt features, including low-RC layer promotion, advanced re-buffering, TNS focus, path balancing, and path compaction. Get exposed to real-world 28nm examples demonstrating before/after QOR improvements.
  • In Session HP101 (3:45-4:35pm), Santosh Navale of NVidia will present "Implementing high performance GHz+ mobile applications processors and GPU with clock concurrent design techniques". Hear how NVidia changed their clocking methodology to tackle several hundred complex, generated, and interacting clock signals, which form the backbone of modern applications processors, and how they handled on-chip variation, complex clock gating, multi-mode/multi-corner, and low-power requirements while improving chip performance with the Cadence clock concurrent optimization (CCOpt) technology on multi-gigahertz ARM CPU-based mobile applications and GeForce GPU processors.

Registration and program information for CDNLive Silicon Valley is available here.

Vasu Madabushi

Five-Minute Tutorial: Set Flip-Chip Bumps as Voltage Sources in EPS/EDI Rail Analysis


When running power and rail analysis for a flip chip, we used to have to spend some time creating the voltage sources. It wasn't too terrible: usually we would output the bumps into a Cadence Encounter Digital Implementation (EDI) .io file, then use a perl script to filter out the pwr/gnd bumps and create the voltage source file format. The script would need a bit of editing from project to project, but nothing too complicated. We ended up with a voltage source file, with one point source per bump. However, it is much easier these days to create voltage sources for a flip chip to be used in Cadence Encounter Power System (EPS) rail analysis (run either from EPS directly, or through EDI). It is also more accurate, since each bump gets modeled with several points in a resistor network, which avoids false EM violations.

The LEF file of your flip chip bump will be used as a reference. Bumps are usually octagonal, although sometimes are represented as squares. Here is an example bump LEF, which I will use to illustrate the process. (Note that a polygon shape is used to create an octagonal bump, but the corresponding coordinates that would have been used for a square are commented out.)

VERSION 5.6 ;
BUSBITCHARS "[]" ;
DIVIDERCHAR "/" ;
UNITS
  DATABASE MICRONS 1000 ;
END UNITS

MACRO BUMP
 CLASS COVER BUMP ;
 FOREIGN BUMP -49.45 -49.45 ;
 ORIGIN 49.45 49.45 ;
 SIZE 98.9 BY 98.9 ;
 PIN PAD
    DIRECTION INOUT ;
    USE SIGNAL ;
    PORT
      LAYER AP ;
        #RECT -49.45 -49.45 49.45 49.45 ;
        POLYGON -20.49 -49.45 20.49 -49.45 49.45 -20.49 49.45 20.49 20.49 49.45 -20.49 49.45 -49.45 20.49 -49.45 -20.49 ;
    END
  END PAD
END BUMP

END LIBRARY


First, create a file called bump.padfile. This file contains one line, the MACRO name of the bump from the LEF. It should look like this:

BUMP

Next, create a file called bump.srcfile. It should look like this:

CELL BUMP
  NET PAD
  PORT {
    AP -49.45 -49.45 49.45 49.45
  }

Make sure the CELL and NET names match your bump LEF. The NET name is the PIN name from the LEF. The port layer name (AP here) is the same layer from the LEF. Remember the commented-out square coordinates that I mentioned in the LEF example above? Here is where that's useful: the coordinates of the PORT shape should be a square that encloses the octagonal bump.
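If you want to double-check that enclosing square, the bounding box of the POLYGON coordinates gives it directly. Here's a quick Tcl sketch using the coordinates from the example LEF above:

# bounding box of the octagonal bump polygon = PORT square for bump.srcfile
set poly {-20.49 -49.45 20.49 -49.45 49.45 -20.49 49.45 20.49 20.49 49.45 -20.49 49.45 -49.45 20.49 -49.45 -20.49}
set xs {}
set ys {}
foreach {x y} $poly { lappend xs $x; lappend ys $y }
# prints: PORT -49.45 -49.45 49.45 49.45
puts "PORT [tcl::mathfunc::min {*}$xs] [tcl::mathfunc::min {*}$ys] [tcl::mathfunc::max {*}$xs] [tcl::mathfunc::max {*}$ys]"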

Now, create the bump powergrid view. Here is a sample script, called create_bump_pwrgrid.ss0p81v.tcl:

read_lib -lef tech.lef \
   BUMP.lef

set_power_library_mode \
    -accuracy fast \
    -celltype allcells \
    -extraction_tech_file cworst.qrcTechFile \
    -lef_layermap lef_layer.map \
    -generic_power_names {VDD 0.81} \
    -generic_ground_names {VSS} \
    -input_type pr_lef

characterize_power_library \
    -celllist_file bump.list \
    -padvsrcfile bump.srcfile \
    -libgen_command_file libgen.inc \
    -output_directory fast_bump.ss_0p81v

A few notes about the files referenced in this script: bump.list is analogous to the stdcell.list file from the standard-cell tutorial - here it is a one-line text file containing BUMP; lef_layer.map is the same QRC-to-LEF layer mapping file described there; and libgen.inc supplies additional commands to the library generator, as the -libgen_command_file option name suggests.

Finally, when running rail analysis, use the bump.padfile in your set_power_pads command. The same padfile can be used for any rail:

set_power_pads \
  -net VDD \
  -format padcell \
  -file bump.padfile

set_power_pads \
  -net VSS \
  -format padcell \
  -file bump.padfile


All bumps will then be recognized as voltage sources, with multiple points inside the bump shape. I hope this has helped simplify your rail analysis flow!

- Kari Summers

Answers to Top 10 Questions on Performing ECOs in EDI System


Applying ECOs to a design can be complex, stressful and error prone, so it's important to apply the right tools and flow to implement the changes successfully. EDI System provides multiple ECO flows to physically implement ECOs efficiently and accurately based on your design requirements. And adding a tool such as Encounter Conformal ECO Designer or the Encounter Timing System's MMMC Signoff ECO capability can lead to faster design closure with fewer iterations.

I field many customer questions related to implementing physical ECOs with EDI System. In this blog I provide answers to 10 of the most common ones. Do you have any tips to share on performing ECOs with EDI System? If so, please post them as a comment below.

Thanks,

Brian


1. What's the best place to find details on how to perform ECOs in EDI System?

The EDI System User Guide has a chapter dedicated to ECO Flows (Cadence Online Support access required). It describes several flows depending on whether it is a pre-mask or post-mask ECO, whether the changes are coming from a new Verilog netlist, DEF or ECO file, and whether gate array cells are being used or not. If you are new to ECO flows in EDI System then the ECO Flows chapter is the place to start!

It's worth mentioning here that ECOs can be implemented using the super-command ecoDesign or by running each command individually (init_design, ecoDefIn, ecoPlace, ...). Both methods are described in the User Guide.

2. What's the difference between a pre-mask and post-mask ECO flow?

A pre-mask ECO is when you are making changes to the design before any masks have been made. With a pre-mask ECO you are free to make changes to any layer, giving you more freedom to implement the ECO.

A post-mask ECO is when you are making changes to a design after masks have been made. Therefore, you want to limit the changes to specific layers so you do not need to re-make all the masks. In a post-mask ECO flow you can utilize existing spare cells which were placed in the design, to avoid changes to layers Metal1 and below. You can also instruct the router which layers it can use to perform ECO routing and which must remain frozen.

3. How do I apply changes made in my RTL to the physical design through ECO?

Encounter Conformal ECO Designer is recommended for performing complex ECOs originating from RTL. It interfaces with RTL Compiler and EDI System to perform the logical and physical ECOs while leveraging Conformal's logical equivalence checking abilities to ensure the ECO was successful for both front-end and back-end signoff.

4. How does EDI System identify spare cells in a post-mask ECO flow?

Spare cells should have a unique string in their instance name to identify them. Then the command specifySpareGate or ecoDesign -spareCells <patternName> is run to identify the spare instances. For example, if all spare cells have _spare_ in their name then they are identified using:

  specifySpareGate -inst *_spare_*

OR

  ecoDesign -spareCells *_spare_* ...

Note if you are making manual ECO changes to a netlist and converting a spare cell to a logical instance, it's important to change the instance name. Otherwise, the instance may be identified as a spare cell if a future ECO is performed because it still has the spare cell instance name.

5. How does EDI System identify the changes in the design?

During the ecoDefIn step, the new netlist is compared against the original placed and routed design. A summary of differences is output to the log file, and a detailed report file is written to the local directory. You can review the report file to see a list of all the differences.

6. How do I use spare cells or gate array cells during placement?

Spare cells are identified using specifySpareGate. Then use the -useSpareCells true option when running ecoPlace to instruct it to swap the unplaced cells with spare cells of the same cell type:

  specifySpareGate -inst *_spare_*
  ecoPlace -useSpareCells true


Gate array style filler cells can be programmed with metal layers so the poly/diffusion and lower layers are not changed, and only the metal and via layer masks need to be modified. If you are using gate array spare cells the flow depends on the SITE type used by the gate array cells.

If your design has GA cells which utilize a SITE type (e.g., GACORE) different from normal standard cells (e.g., CORE), then use:

  ecoPlace -useGACells GACORE

If your design has GA cells which utilize the same SITE type as standard cells, use:

  ecoPlace -useGAFillerCells {List of GAFillerCells}

Reference the User Guide for the complete flow.

7. Is ecoPlace -useSpareCells true timing driven?

ecoPlace will choose the spare cells to minimize wire length, but it is not timing driven. After ecoPlace you can run ecoSwapSpareCell to relocate an instance to the location of another spare cell of the same type. Alternatively, you can run ecoRemap in place of ecoPlace. ecoRemap is timing driven; it automatically analyzes the functionality of the newly added cells and remaps them to available spare cells, performing changes to improve timing and minimize DRVs.

8. How do I freeze certain metal layers during routing?

In a post-mask ECO Flow run ecoRoute with the -modifyOnlyLayers option to specify which layers it is allowed to modify. For example, to route using only Metal1 through Metal3:

  ecoRoute -modifyOnlyLayers 1:3

9. How does ECO routing deal with metal fill?

When performing a post-mask ECO flow, ecoRoute will ignore the metal fill while routing. This will likely cause DRC violations between the ECO routes and the metal fill. To fix these violations, run verifyGeometry followed by the trimMetalFill command. This will cut back the metal fill from the ECO routing to fix the violations.
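In command form:

  verifyGeometry
  trimMetalFill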

10. Does EDI System support interactive (manual) ECOs?

Yes, EDI System provides a number of interactive commands to both evaluate and commit ECO changes. See the Interactive ECO chapter of the EDI System User Guide for details.

When performing interactive ECOs, make sure setEcoMode is set as desired. Here are some specific options to pay attention to, plus tips to speed up run time when implementing a series of ECOs (a short sketch follows the list):

  setEcoMode -updateTiming - The default is false, letting you wait until all ECOs are performed before running timing analysis. If set to true, timing analysis is run after each ECO command.
  setEcoMode -honorDontTouch, -honorDontUse, -honorFixedStatus - The default for all of these is true, so if you find you cannot make a change, check whether any of these apply.
  setEcoMode -batchMode - Set this to true to improve runtime if you are performing many ECOs.
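Putting these options together, here's a minimal sketch of a batched interactive ECO session (the instance, net, and cell names are hypothetical):

  setEcoMode -batchMode true -updateTiming false
  # a series of interactive ECO edits, for example:
  ecoChangeCell -inst U_eco_fix -cell BUFX8
  ecoAddRepeater -net eco_net -cell BUFX4
  setEcoMode -batchMode false
  # analyze timing once at the end
  timeDesign -postRoute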
